New accelerated learning algorithm motivated from novel shape of error surfaces for multilayer feedforward neural networks

Author(s):  
Seung-Joon Lee ◽  
Dong-Jo Park

1994 ◽  
Vol 05 (01) ◽  
pp. 67-75 ◽  
Author(s):  
BYOUNG-TAK ZHANG

Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which the learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of given examples for solving the particular task. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks. This method can be used in conjunction with other variations of gradient descent algorithms.
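The selection scheme described above can be illustrated with a minimal sketch. The abstract does not give the exact criticality measure, so the code below assumes a simple proxy (an unused example's current absolute error) and a toy single-weight linear model in place of a multilayer network; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = 2x with a single-weight linear model standing in for
# a network. The error-based criticality measure below is an assumed
# proxy, not the paper's exact formulation.
X = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * X

w = 0.0                      # the single trainable weight
lr = 0.1
selected = list(range(5))    # start learning from a small training subset
remaining = list(range(5, len(X)))

for epoch in range(50):
    # Train only on the currently selected examples (plain SGD).
    for i in selected:
        err = w * X[i] - y[i]
        w -= lr * err * X[i]
    # "Criticality" of an unused example: its current absolute error.
    if remaining:
        crit = [abs(w * X[i] - y[i]) for i in remaining]
        # Grow the training set by the single most critical example.
        selected.append(remaining.pop(int(np.argmax(crit))))

print(round(w, 4), len(selected))
```

Because each epoch trains only on the selected subset, early epochs are cheap, and the subset grows toward only those examples the current model still gets wrong.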


2002 ◽  
Vol 12 (01) ◽  
pp. 45-67 ◽  
Author(s):  
M. R. MEYBODI ◽  
H. BEIGY

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which involves the parameters learning rate (η), momentum factor (α) and steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. In this paper, we present several classes of learning automata based solutions to the problem of adapting the BP algorithm's parameters. By interconnecting learning automata with the feedforward neural network, we use a learning automata scheme to adjust the parameters η, α and λ based on the observed random response of the neural network. An important property of the proposed schemes is their ability to escape from local minima with high probability during training. The feasibility of the proposed methods is demonstrated through simulations on several problems.
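The parameter-adaptation idea can be sketched with a minimal example. The abstract does not specify which automaton variant is used, so the code below assumes a standard linear reward-penalty (L_RP) automaton choosing a learning rate η from a fixed candidate set, applied to a toy scalar objective rather than a real network; the candidate set, gains and objective are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate learning rates the automaton chooses between (assumed set);
# eta = 1.1 is deliberately divergent for this objective.
etas = np.array([0.05, 0.4, 1.1])
p = np.ones(len(etas)) / len(etas)   # action probabilities
a, b = 0.1, 0.05                     # reward / penalty gains

def lrp_update(p, k, rewarded, a=a, b=b):
    """Standard L_RP probability update for chosen action k."""
    r = len(p)
    if rewarded:
        q = (1 - a) * p
        q[k] = p[k] + a * (1 - p[k])
    else:
        q = b / (r - 1) + (1 - b) * p
        q[k] = (1 - b) * p[k]
    return q

# Toy training problem: minimise f(w) = (w - 3)^2 by gradient descent,
# with the automaton supplying the step size at each iteration.
w = 0.0
prev_loss = (w - 3.0) ** 2
for step in range(200):
    k = rng.choice(len(etas), p=p)
    w -= etas[k] * 2.0 * (w - 3.0)   # gradient step with chosen eta
    loss = (w - 3.0) ** 2
    # Environment response: reward the action if the loss decreased.
    p = lrp_update(p, k, rewarded=(loss < prev_loss))
    prev_loss = loss

print(round(w, 4))
```

The random response of the network (here, whether the loss decreased) drives the automaton: rates that tend to reduce the loss gain probability mass, while the divergent rate is penalised, which is the mechanism the paper exploits for η, α and λ jointly.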


1994 ◽  
Vol 7 (4) ◽  
pp. 661-670 ◽  
Author(s):  
Vicken Kasparian ◽  
Celal Batur ◽  
H. Zhang ◽  
J. Padovan
