SUPERVISED TRAINING OF DYNAMICAL NEURAL NETWORKS FOR ASSOCIATIVE MEMORY DESIGN AND IDENTIFICATION OF NONLINEAR MAPS

1994 ◽  
Vol 05 (03) ◽  
pp. 165-180 ◽  
Author(s):  
SUBRAMANIA I. SUDHARSANAN ◽  
MALUR K. SUNDARESHAN

Complexity of implementation has been a major difficulty in the development of gradient descent learning algorithms for dynamical neural networks with feedback and recurrent connections. Some insights from the stability properties of the equilibrium points of the network, which suggest an appropriate tailoring of the sigmoidal nonlinear functions, can however be utilized in obtaining simplified learning rules, as demonstrated in this paper. An analytical proof of convergence of the learning scheme under specific conditions is given and some upper bounds on the adaptation parameters for an efficient implementation of the training procedure are developed. The performance features of the learning algorithm are illustrated by applying it to two problems of importance, viz., design of associative memories and nonlinear input-output mapping. For the first application, a systematic procedure is given for training a network to store multiple memory vectors as its stable equilibrium points, whereas for the second application, specific training rules are developed for a three-layer network architecture comprising a dynamical hidden layer for the identification of nonlinear input-output maps. A comparison with the performance of a standard backpropagation network provides an illustration of the capabilities of the present network architecture and the learning algorithm.
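As a rough illustration of the kind of dynamics involved, and not the authors' specific model or learning rules, the sketch below Euler-integrates a continuous-time network with a tailored sigmoidal nonlinearity until it settles near an equilibrium; the weight matrix W, bias b, and gain are placeholders that a training procedure would have to supply.

```python
import numpy as np

def settle(W, b, x0, gain=1.5, tau=1.0, dt=0.01, steps=3000):
    """Euler-integrate tau*dx/dt = -x + W @ sigma(x) + b toward an equilibrium."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        s = np.tanh(gain * x)               # sigmoidal nonlinearity; the gain choice
        x += (dt / tau) * (-x + W @ s + b)  # influences which equilibria are stable
    return np.sign(x)                       # thresholded state ~ recalled memory
```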

Author(s):  
William C. Carpenter ◽  
Margery E. Hoffman

This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined neural networks tend to give good approximations over a region of interest, while underdetermined networks give approximations which can satisfy the training pairs but may give poor approximations over that region of interest. A scheme is presented that adjusts the number of hidden layer nodes in a neural network so as to give an overdetermined approximation. The advantages and disadvantages of using multiple output nodes are discussed. Guidelines for selecting the number of output nodes are presented.
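One plausible reading of the over/underdetermined criterion, offered only as a sketch (the paper's actual node-adjustment scheme is more involved), is to compare training equations against free parameters and shrink the hidden layer until the former dominate:

```python
def hidden_nodes_for_overdetermined(n_pairs, n_in, n_out, start=64):
    """Largest hidden-layer size (<= start) giving an overdetermined fit."""
    for h in range(start, 0, -1):
        n_params = h * (n_in + 1) + n_out * (h + 1)  # weights + biases
        if n_pairs * n_out >= n_params:              # equations >= unknowns
            return h
    return 1
```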


2005 ◽  
Vol 128 (4) ◽  
pp. 773-782 ◽  
Author(s):  
H. S. Tan

The conventional approach to neural network-based aircraft engine fault diagnostics has mainly relied on multilayer feed-forward systems with sigmoidal hidden neurons trained by back propagation, as well as radial basis function networks. In this paper, we explore two novel approaches to the fault-classification problem using (i) Fourier neural networks, which synthesize the approximation capability of multidimensional Fourier transforms with gradient-descent learning, and (ii) a class of generalized single hidden layer networks (GSLN), which self-structure via Gram-Schmidt orthonormalization. Using a simulation program for the F404 engine, we generate steady-state engine parameters corresponding to a set of combined two-module deficiencies and require various neural networks to classify the multiple faults. We show that, compared to the conventional network architecture, the Fourier neural network exhibits stronger noise robustness and the GSLNs converge substantially faster.
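For intuition only, here is a crude stand-in for a Fourier neural network: cosine hidden units over random frequencies with a least-squares readout. The paper's networks instead train the frequencies by gradient descent, and the GSLN variant grows its basis via Gram-Schmidt orthonormalization; neither is reproduced here.

```python
import numpy as np

def fourier_fit(X, y, n_units=50, seed=0):
    """Cosine features cos(X @ Omega + phi) with a linear least-squares readout."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(X.shape[1], n_units))     # frequency matrix (random here)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_units)  # phase offsets
    c = np.linalg.lstsq(np.cos(X @ Omega + phi), y, rcond=None)[0]
    return Omega, phi, c

def fourier_predict(X, Omega, phi, c):
    return np.cos(X @ Omega + phi) @ c
```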


1997 ◽  
Vol 16 (2) ◽  
pp. 109-144 ◽  
Author(s):  
M.O. Tokhi ◽  
R. Wood

This paper presents the development of a neuro-adaptive active noise control (ANC) system. Multi-layered perceptron neural networks with a backpropagation learning algorithm are considered in both the modelling and control contexts. The capabilities of the neural network in modelling dynamical systems are investigated. A feedforward ANC structure is considered for optimum cancellation of broadband noise in a three-dimensional propagation medium. An on-line adaptation and training mechanism allowing a neural network architecture to characterise the optimal controller within the ANC system is developed. The neuro-adaptive ANC algorithm thus developed is implemented within a free-field environment and simulation results verifying its performance are presented and discussed.
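A heavily simplified sketch of the feedforward ANC idea, substituting a linear FIR controller with an LMS update for the paper's multilayer perceptron, and assuming an ideal (identity) secondary path between controller output and error sensor:

```python
import numpy as np

def anc_lms(reference, disturbance, n_taps=32, mu=1e-3):
    """Adapt an FIR controller online so its anti-noise cancels the disturbance."""
    w = np.zeros(n_taps)                  # controller taps, adapted sample by sample
    buf = np.zeros(n_taps)                # recent reference samples
    residual = np.zeros(len(disturbance))
    for n in range(len(reference)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        e = disturbance[n] - w @ buf      # residual noise at the error sensor
        w += mu * e * buf                 # LMS step down the gradient of e**2
        residual[n] = e
    return residual, w
```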


2018 ◽  
Author(s):  
Sutedi Sutedi

Diabetes Mellitus (DM) is a dangerous disease that affects people across many walks of life, and it is not easy for the general public to recognize it accurately, so a system is needed that can identify it reliably. The system is built using a neural network with the backpropagation method and the sigmoid activation function. The network architecture uses 8 input nodes, 5 hidden nodes, and 2 output nodes. The results show that this method successfully classifies diabetic and non-diabetic data with an accuracy rate approaching 100%.
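For reference, a comparable classifier is a few lines in scikit-learn; the 8-5-2 topology follows the abstract, while the data arrays (X_train, y_train, and so on) are placeholders, since the paper's preprocessing and label encoding are not specified:

```python
from sklearn.neural_network import MLPClassifier

# 8 inputs -> 5 logistic (sigmoid) hidden nodes -> diabetic / non-diabetic.
# scikit-learn uses a single logistic output for two classes, which is
# equivalent to the paper's pair of output nodes under one-hot encoding.
clf = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1, max_iter=5000)
# clf.fit(X_train, y_train)            # X_train: (n, 8) features, y_train: labels
# accuracy = clf.score(X_test, y_test)
```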


Author(s):  
Ahmed Kawther Hussein

Arabic calligraphy is a form of Arabic writing art in which letters can be rendered in various curved or segmented styles. Efforts to automate the identification of Arabic calligraphy with artificial intelligence have been fewer than for other languages. Hence, this article proposes using four types of features and a single hidden layer neural network trained on Arabic calligraphy to predict the type of calligraphy used. For the neural networks, we compared the case of non-connected input and output layers in the extreme learning machine (ELM) with the case of connected input-output layers in the fast learning network (FLN). The prediction accuracy of the FLN was superior to that of the ELM, which showed variation in the obtained accuracy.
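A compact way to see the distinction the article exploits (a sketch under the usual definitions of these models, not the article's code): both solve a linear readout over a random hidden layer, but the FLN-style variant also concatenates the raw inputs into the design matrix, which is what "connected input-output layers" amounts to.

```python
import numpy as np

def fit_readout(X, Y, n_hidden=80, connect_inputs=False, seed=0):
    """ELM when connect_inputs=False; FLN-like direct links when True."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # random hidden layer
    D = np.hstack([H, X]) if connect_inputs else H  # FLN adds input->output links
    beta = np.linalg.pinv(D) @ Y                    # one-shot least squares
    return W, b, beta
```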


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 147 ◽  
Author(s):  
Jun Ye ◽  
Wenhua Cui

Neural networks are powerful universal approximation tools. They have been utilized for function/data approximation, classification, pattern recognition, and various other applications. Uncertain or interval values result from the incompleteness of measurements, human observation, and estimation in the real world. Thus, a neutrosophic number (NsN) can represent both certain and uncertain information in an indeterminate setting and implies a changeable interval depending on its indeterminate ranges. In NsN settings, however, existing interval neural networks cannot deal with uncertain problems involving NsNs. Therefore, this original study proposes a neutrosophic compound orthogonal neural network (NCONN) for the first time, containing NsN weight values, NsN inputs and outputs, and hidden-layer neutrosophic neuron functions, to approximate neutrosophic functions/NsN data. In the proposed NCONN model, single input and single output neurons are the transmission nodes of NsN data, and hidden-layer neutrosophic neurons are constructed from compound functions of the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. In addition, illustrative and actual examples are provided to verify the effectiveness and learning performance of the proposed NCONN model for approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximation problems of neutrosophic nonlinear functions and NsN data; its main advantages are a simple learning algorithm, faster learning convergence, and higher learning accuracy in indeterminate/NsN environments.
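Stripping away the neutrosophic (interval-valued) part, the compound hidden neuron can be sketched as a Chebyshev orthogonal basis composed with a sigmoid; the NCONN applies the same construction with NsN weights, inputs, and outputs, which this crisp-valued version does not attempt.

```python
import numpy as np

def chebyshev_basis(t, degree):
    """Chebyshev polynomials T_0..T_degree of the first kind on t in [-1, 1]."""
    t = np.asarray(t, dtype=float)
    T = [np.ones_like(t), t]
    for _ in range(2, degree + 1):
        T.append(2.0 * t * T[-1] - T[-2])          # T_k = 2 t T_{k-1} - T_{k-2}
    return np.stack(T[: degree + 1], axis=-1)

def compound_hidden(x, degree=4):
    s = 1.0 / (1.0 + np.exp(-np.asarray(x)))       # sigmoid part of the compound
    return chebyshev_basis(2.0 * s - 1.0, degree)  # rescale into [-1, 1]
```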


Author(s):  
Qingsong Xu

Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks. In theory, this algorithm is able to provide good generalization capability at extremely fast learning speed. Comparative studies of benchmark function approximation problems revealed that ELM can learn thousands of times faster than conventional neural network (NN) and can produce good generalization performance in most cases. Unfortunately, the research on damage localization using ELM is limited in the literature. In this chapter, the ELM is extended to the domain of damage localization of plate structures. Its effectiveness in comparison with typical neural networks such as back-propagation neural network (BPNN) and least squares support vector machine (LSSVM) is illustrated through experimental studies. Comparative investigations in terms of learning time and localization accuracy are carried out in detail. It is shown that ELM paves a new way in the domain of plate structure health monitoring. Both advantages and disadvantages of using ELM are discussed.
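The ELM recipe itself is short enough to state as code (a generic sketch, not the chapter's implementation): hidden weights are drawn at random and never trained, and only the linear output weights are solved, in a single least-squares step, which is where the speed comes from.

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, fixed
    b = rng.normal(size=n_hidden)                # random biases, fixed
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # output weights in one shot
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```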


2000 ◽  
Vol 12 (2) ◽  
pp. 451-472 ◽  
Author(s):  
Fation Sevrani ◽  
Kennichi Abe

In this article we present techniques for designing associative memories to be implemented by a class of synchronous discrete-time neural networks based on a generalization of the brain-state-in-a-box neural model. First, we address the local qualitative properties and global qualitative aspects of the class of neural networks considered. Our approach to the stability analysis of the equilibrium points of the network gives insight into the extent of the domain of attraction for the patterns to be stored as asymptotically stable equilibrium points and is useful in the analysis of the retrieval performance of the network and also for design purposes. By making use of the analysis results as constraints, the design for associative memory is performed by solving a constraint optimization problem whereby each of the stored patterns is guaranteed a substantial domain of attraction. The performance of the designed network is illustrated by means of three specific examples.
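For context, the classical brain-state-in-a-box recall iteration is shown below; the article studies a synchronous discrete-time generalization and, unlike this sketch, designs W by constrained optimization so that each stored pattern has a guaranteed domain of attraction.

```python
import numpy as np

def bsb_recall(x0, W, alpha=0.3, steps=200):
    """Iterate x <- sat(x + alpha * W @ x) until the state saturates at a corner."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = np.clip(x + alpha * W @ x, -1.0, 1.0)  # saturating linear activation
    return x                                       # ideally a stored +/-1 pattern
```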

