LYAPUNOV THEORY BASED ADAPTIVE LEARNING ALGORITHM FOR MULTILAYER NEURAL NETWORKS

2014 ◽  
Vol 24 (6) ◽  
pp. 619-636
Author(s):  
Nurettin Acır ◽  
Engin Cemal Mengüç


Author(s):  
A. KARAMI ◽  
H. R. KARIMI ◽  
P. JABEHDAR MARALANI ◽  
B. MOSHIRI

The paper is concerned with the application of wavelet-based neural networks to the optimal control of robotic manipulator motion. The manipulator model, with friction and disturbances taken into account, is nonlinear and uncertain. The optimal control law is found by optimizing the Hamilton–Jacobi–Bellman (H-J-B) equation, and it is shown how wavelet-based neural networks can handle the nonlinearities through this optimization without a preliminary off-line learning phase. The neural network is trained on-line, with an adaptive learning algorithm derived from Lyapunov theory, so that both tracking stability and convergence of the estimation error for the nonlinear function are guaranteed in the closed-loop system. The Lyapunov function for the nonlinear analysis is derived from the user-specified quadratic performance index. Simulation results on a three-link robot manipulator show the satisfactory performance of the proposed control schemes even in the presence of large modeling uncertainties and external disturbances. Furthermore, the tracking error obtained with wavelet neural networks is shown to be smaller than that of conventional neural networks.
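To make the Lyapunov-derived on-line adaptation concrete, the sketch below implements a simple controller of the form tau = W^T phi(q, qdot) + Kv*r, where the wavelet-network weights W are updated by W_dot = Gamma * phi * r^T with the filtered tracking error r = edot + Lambda*e. The mexican-hat basis, the gain values, and the class name WaveletNetController are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def mexican_hat(z):
    """Mexican-hat mother wavelet, a common choice for wavelet networks."""
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

class WaveletNetController:
    """Illustrative on-line adaptive controller: tau = W^T phi(x) + Kv * r.

    W is updated with a Lyapunov-motivated rule  W_dot = Gamma * phi * r^T
    (sketch only; gains, dilations and translations are placeholders, not
    the paper's values).
    """

    def __init__(self, n_joints, n_wavelets, gamma=5.0, kv=20.0, lam=5.0):
        rng = np.random.default_rng(0)
        self.centers = rng.uniform(-1, 1, (n_wavelets, 2 * n_joints))  # translations
        self.dilations = np.full(n_wavelets, 2.0)                      # dilations
        self.W = np.zeros((n_wavelets, n_joints))                      # adaptive weights
        self.gamma, self.kv, self.lam = gamma, kv, lam

    def phi(self, q, qd):
        x = np.concatenate([q, qd])
        z = np.linalg.norm(x - self.centers, axis=1) / self.dilations
        return mexican_hat(z)

    def control(self, q, qd, q_ref, qd_ref, dt):
        e, ed = q_ref - q, qd_ref - qd
        r = ed + self.lam * e                            # filtered tracking error
        basis = self.phi(q, qd)
        tau = self.W.T @ basis + self.kv * r             # feedforward estimate + PD-like term
        self.W += self.gamma * np.outer(basis, r) * dt   # Lyapunov-style adaptation
        return tau
```

In the usual derivation, choosing the weight update proportional to phi * r^T cancels the cross term in the derivative of the quadratic Lyapunov candidate, which is what yields the boundedness guarantee quoted in the abstract.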


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Syed Saad Azhar Ali ◽  
Muhammad Moinuddin ◽  
Kamran Raza ◽  
Syed Hasan Adil

Radial basis function neural networks (RBFNNs) are used in a variety of applications such as pattern recognition, nonlinear identification, control, and time series prediction. In this paper, the learning algorithm of radial basis function neural networks is analyzed in a feedback structure. The robustness of the learning algorithm is discussed in the presence of uncertainties that might be due to noisy perturbations at the input or to modeling mismatch. An intelligent adaptation rule is developed for the learning rate of the RBFNN which yields faster convergence via an estimate of the error energy, while guaranteeing l2 stability through an upper bound obtained from the small gain theorem. Simulation results are presented to support our theoretical development.
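A minimal sketch of the idea follows, assuming a Gaussian-kernel RBF network whose learning rate is scaled down as a running estimate of the error energy grows; the exact adaptation rule and the small-gain bound of the paper are not reproduced, and the parameter names (mu0, forget, eps) are placeholders.

```python
import numpy as np

class AdaptiveRateRBFNN:
    """Illustrative RBF network whose learning rate shrinks as the running
    estimate of error energy grows, keeping updates conservative under noisy
    inputs.  The rule mu_k = mu0 / (eps + E_k) is a stand-in for the paper's
    small-gain-theorem-based rule, not the exact expression."""

    def __init__(self, centers, width=1.0, mu0=0.5, forget=0.95, eps=1e-3):
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        self.w = np.zeros(len(self.centers))
        self.mu0, self.forget, self.eps = mu0, forget, eps
        self.err_energy = 0.0

    def _phi(self, x):
        d = np.linalg.norm(self.centers - x, axis=1)
        return np.exp(-(d / self.width) ** 2)

    def predict(self, x):
        return self._phi(x) @ self.w

    def update(self, x, target):
        phi = self._phi(x)
        err = target - phi @ self.w
        # exponentially weighted running estimate of the error energy
        self.err_energy = self.forget * self.err_energy + (1 - self.forget) * err**2
        mu = self.mu0 / (self.eps + self.err_energy)      # adaptive learning rate
        mu = min(mu, 1.0 / (self.eps + phi @ phi))        # keep the update contractive
        self.w += mu * err * phi
        return err
```

The extra clipping of mu by 1/(eps + ||phi||^2) is a normalized-LMS-style safeguard added only to keep this sketch stable; the paper derives its own stability bound.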


1994 ◽  
Vol 05 (01) ◽  
pp. 67-75 ◽  
Author(s):  
BYOUNG-TAK ZHANG

Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which the learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of given examples for solving the particular task. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks. This method can be used in conjunction with other variations of gradient descent algorithms.
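As an illustration of example selection by criticality, the sketch below ranks training examples by the current network's output error and grows the active training set from a small seed. The paper derives its own criticality measure, so the error-magnitude proxy, the train_step/predict callables, and the subset sizes here are assumptions for illustration only.

```python
import numpy as np

def criticality(predict, X, y):
    """Criticality proxy: absolute output error per example.  (Zhang's paper
    derives its own measure; this stand-in just ranks examples by how badly
    the current network handles them.)"""
    return np.abs(predict(X) - y).reshape(len(X), -1).sum(axis=1)

def incremental_training(train_step, predict, X, y,
                         init_size=20, add_size=10, rounds=20):
    """Grow the active training set from a small seed, each round adding the
    examples the current network finds most critical, then retraining."""
    rng = np.random.default_rng(0)
    active = set(rng.choice(len(X), size=init_size, replace=False).tolist())
    for _ in range(rounds):
        idx = np.fromiter(active, dtype=int)
        train_step(X[idx], y[idx])                 # e.g. a few epochs of backprop
        scores = criticality(predict, X, y)
        scores[idx] = -np.inf                      # do not re-add active examples
        new = np.argsort(scores)[-add_size:]
        active.update(new.tolist())
    return np.fromiter(active, dtype=int)
```

Any trainable model can be plugged in by supplying train_step (runs a short burst of training on the given subset) and predict (maps inputs to network outputs).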


2013 ◽  
Vol 411-414 ◽  
pp. 1660-1664
Author(s):  
Yan Jun Zhao ◽  
Li LIU

This paper introduces fuzzy neural network technology into the adaptive filter and investigates its structure and algorithms further. First, the fuzzy rules are determined and the network structure is built by dividing the input space into fuzzy subspaces. Second, the membership functions are chosen, the layers are defined, and the network is trained with an adaptive learning algorithm. Third, the training error is minimized through repeated tuning. Finally, the connection weights and the centers and widths of the membership functions are adjusted using expert experience. The optimal performance of the adaptive Wiener filter is thus realized on the basis of fuzzy neural networks.
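A compact sketch of such a fuzzy-rule filter is given below, assuming Gaussian membership functions on a uniform grid and a zero-order Takagi–Sugeno rule base. Only the consequent weights are adapted here, whereas the paper also tunes centers and widths and incorporates expert knowledge, so every numeric choice (grid range, widths, learning rate) is a placeholder.

```python
import numpy as np

class FuzzyNeuralFilter:
    """Sketch of a zero-order Takagi-Sugeno fuzzy network used as an adaptive
    filter: each input dimension is partitioned into Gaussian fuzzy sets, every
    combination of sets forms one rule, and normalized rule firing strengths
    weight the consequent parameters.  Only the consequent weights are adapted
    here (LMS-style descent on the filtering error); the paper also tunes the
    membership-function centers and widths."""

    def __init__(self, n_inputs=2, sets_per_input=3, lr=0.05):
        grid = np.linspace(-1.0, 1.0, sets_per_input)
        # one (center, width) pair per fuzzy set, per input dimension
        self.centers = np.tile(grid, (n_inputs, 1))
        self.widths = np.full((n_inputs, sets_per_input), 0.5)
        # one consequent weight per rule (Cartesian product of fuzzy sets)
        self.rules = np.array(np.meshgrid(*[range(sets_per_input)] * n_inputs)).T.reshape(-1, n_inputs)
        self.w = np.zeros(len(self.rules))
        self.lr = lr

    def _firing(self, x):
        mu = np.exp(-((x[:, None] - self.centers) / self.widths) ** 2)   # memberships
        strengths = np.prod(mu[np.arange(len(x)), self.rules], axis=1)   # product t-norm
        return strengths / (strengths.sum() + 1e-12)

    def step(self, x, desired):
        s = self._firing(np.asarray(x, dtype=float))
        y = s @ self.w
        err = desired - y
        self.w += self.lr * err * s        # adapt consequent weights
        return y, err
```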


1991 ◽  
Vol 3 (1) ◽  
pp. 135-143 ◽  
Author(s):  
Hyuek-Jae Lee ◽  
Soo-Young Lee ◽  
Sang-Yung Shin ◽  
Bo-Yun Koh

TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for optical implementation of large-scale artificial neural networks. For fully interconnected single-layer neural networks with N input and M output neurons, TAG contains two different types of interconnections, i.e., M × N global fixed interconnections and N + M adaptive gain controls. For two-dimensional input patterns, the former may be achieved by multifacet holograms, and the latter by spatial light modulators (SLMs). For the same number of input and output neurons, TAG requires far fewer adaptive elements and offers a route to large-scale optical implementation at some sacrifice in performance compared to the perceptron. The training algorithm is based on gradient descent and error backpropagation, and is easily extensible to multilayer architectures. Computer simulation demonstrates reasonable performance of TAG compared to the perceptron. An electrooptical implementation of TAG is also proposed.
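The split into a large fixed interconnection matrix and a small set of adaptive gains can be sketched as follows; the sigmoid activation, the random fixed matrix, and the plain gradient-descent gain updates are assumptions made for illustration, not the optical system's actual transfer characteristics or the exact TAG update rules.

```python
import numpy as np

class TAGLayer:
    """Sketch of a Training-by-Adaptive-Gain layer: a fixed random N-by-M
    interconnection matrix (standing in for the holographic interconnections)
    plus N input gains and M output gains, which are the only trainable
    parameters.  Output: y = g_out * sigmoid(W^T (g_in * x)).  Updates are
    plain gradient descent on squared error, not the exact TAG derivation."""

    def __init__(self, n_in, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)  # fixed
        self.g_in = np.ones(n_in)     # N adaptive input gains
        self.g_out = np.ones(n_out)   # M adaptive output gains
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.u = self.W.T @ (self.g_in * x)
        self.h = 1.0 / (1.0 + np.exp(-self.u))      # sigmoid activation
        return self.g_out * self.h

    def backward(self, err):
        """err = target - output; adjust only the two gain vectors."""
        self.g_out += self.lr * err * self.h
        dh = err * self.g_out * self.h * (1 - self.h)
        self.g_in += self.lr * (self.W @ dh) * self.x
        return self.W @ dh   # signal that could be propagated to earlier layers
```

With N inputs and M outputs this sketch trains only N + M numbers per layer, mirroring the paper's point that the M × N interconnection pattern can stay fixed in a hologram while only the gains need programmable hardware.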


2013 ◽  
Vol 37 ◽  
pp. 182-188 ◽  
Author(s):  
Bernard Widrow ◽  
Aaron Greenblatt ◽  
Youngsik Kim ◽  
Dookun Park
