Comparison and evaluation of variants of the conjugate gradient method for efficient learning in feed-forward neural networks with backward error propagation

1992 · Vol 3 (1) · pp. 27-35
Author(s): John A Kinsella

2019 · Vol 24 (1) · pp. 115
Author(s): Hind H. Mohammed

In this paper, we present different types of CG algorithms based on the Perry conjugacy condition. We use the new conjugate gradient training (GDY) algorithm to train MFNNs, prove its descent property and global convergence, and then test the behavior of the algorithm in training artificial neural networks, comparing it with known algorithms in this field on two types of problems. http://dx.doi.org/10.25130/tjps.24.2019.020
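The abstract does not state the GDY direction formula, so the sketch below only illustrates the general shape of conjugate gradient training for a small feed-forward network: compute the gradient by backpropagation, take a line-search step along the current conjugate direction, and update the direction with a beta formula. The Dai-Yuan beta and the backtracking line search used here are illustrative assumptions, not the paper's method.

```python
# A minimal sketch of nonlinear conjugate gradient training for a tiny
# feed-forward network (2-4-1, XOR data). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
shapes = [(2, 4), (1, 4), (4, 1), (1, 1)]          # W1, b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss_and_grad(w):
    W1, b1, W2, b2 = unpack(w)
    a1 = np.tanh(X @ W1 + b1)
    out = a1 @ W2 + b2                              # linear output unit
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    # Backpropagation of the error to get the gradient.
    d_out = err / len(X)
    gW2 = a1.T @ d_out
    gb2 = d_out.sum(axis=0, keepdims=True)
    d_a1 = d_out @ W2.T * (1 - a1 ** 2)
    gW1 = X.T @ d_a1
    gb1 = d_a1.sum(axis=0, keepdims=True)
    return loss, np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])

w = rng.normal(scale=0.5, size=sum(sizes))
loss, g = loss_and_grad(w)
d = -g                                              # first search direction
for it in range(200):
    # Backtracking line search along d (Armijo condition).
    alpha, slope = 1.0, g @ d
    while loss_and_grad(w + alpha * d)[0] > loss + 1e-4 * alpha * slope:
        alpha *= 0.5
        if alpha < 1e-12:
            break
    w = w + alpha * d
    new_loss, new_g = loss_and_grad(w)
    beta = (new_g @ new_g) / (d @ (new_g - g))      # Dai-Yuan beta (assumed)
    d = -new_g + beta * d
    loss, g = new_loss, new_g
    if np.linalg.norm(g) < 1e-6:
        break
print("final loss:", loss)
```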


1993 · Vol 22 (464)
Author(s): Martin F. Møller

Since the discovery of the back-propagation method, many modified and new algorithms have been proposed for training feed-forward neural networks. The problem of slow convergence has, however, not been solved when training on large-scale problems, and there is still a need for more efficient algorithms. This Ph.D. thesis describes different approaches to improving convergence. The main results of the thesis are the development of the Scaled Conjugate Gradient algorithm and a stochastic version of this algorithm. Other important results are the development of methods that can derive and use Hessian information in an efficient way. The main part of the thesis is the 5 papers presented in appendices A-E. Chapters 1-6 give an overview of learning in feed-forward neural networks, put these papers in perspective and present the most important results. The conclusions of the thesis are:

* Conjugate gradient algorithms are very suitable for training feed-forward networks.
* Second-order information, obtained by calculations on the Hessian matrix, can be used to improve convergence.
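A key ingredient of the Scaled Conjugate Gradient approach is that the product of the Hessian with the search direction can be estimated from two gradient evaluations, without forming the Hessian. The sketch below illustrates that finite-difference Hessian-vector product on a quadratic test function (an illustrative stand-in for a network error function) and the raw step size it feeds; the scaling and regularisation machinery of the full algorithm is omitted here.

```python
# Finite-difference Hessian-vector product, as used by SCG-style methods
# to exploit second-order information cheaply. Illustrative sketch only.
import numpy as np

def grad(w, A, b):
    """Gradient of the quadratic E(w) = 0.5 w^T A w - b^T w."""
    return A @ w - b

def hessian_vector(w, p, A, b, sigma=1e-4):
    """Approximate H p by a one-sided finite difference of the gradient."""
    scale = sigma / np.linalg.norm(p)
    return (grad(w + scale * p, A, b) - grad(w, A, b)) / scale

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5 * np.eye(5)           # symmetric positive definite Hessian
b = rng.normal(size=5)
w = rng.normal(size=5)
p = rng.normal(size=5)

approx = hessian_vector(w, p, A, b)
exact = A @ p                          # for a quadratic, H p is exact
print("max abs error:", np.max(np.abs(approx - exact)))

# The product s = H p enters the raw step size alpha = (p^T r) / (p^T s),
# where r = -grad(w); the full method adds a lambda * ||p||^2 scaling term
# to keep the denominator positive (omitted in this sketch).
r = -grad(w, A, b)
alpha = (p @ r) / (p @ approx)
print("step size along p:", alpha)
```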


1991 · Vol 02 (04) · pp. 291-301
Author(s): E.M. Johansson, F.U. Dowla, D.M. Goodman

In many applications, the number of interconnects or weights in a neural network is so large that the learning time for the conventional backpropagation algorithm can become excessively long. Numerical optimization theory offers a rich and robust set of techniques which can be applied to neural networks to improve learning rates. In particular, the conjugate gradient method is easily adapted to the backpropagation learning problem. This paper describes the conjugate gradient method, its application to the backpropagation learning problem and presents results of numerical tests which compare conventional backpropagation, steepest descent and the conjugate gradient methods. For the parity problem, we find that the conjugate gradient method is an order of magnitude faster than conventional backpropagation with momentum.
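For context on the baseline being compared against, here is a minimal sketch of conventional backpropagation with momentum on a small parity problem; the network size, learning rate and momentum value are illustrative choices, not the paper's settings.

```python
# Conventional backpropagation with momentum on 3-bit parity. Illustrative
# baseline sketch; hyperparameters are arbitrary choices.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# 3-bit parity: target is 1 when an odd number of input bits is set.
X = np.array(list(product([0.0, 1.0], repeat=3)))
y = (X.sum(axis=1) % 2).reshape(-1, 1)

n_in, n_hid = 3, 6
W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros((1, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, 1));    b2 = np.zeros((1, 1))
params = [W1, b1, W2, b2]
velocity = [np.zeros_like(p) for p in params]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, momentum = 0.5, 0.9
for epoch in range(5000):
    # Forward pass.
    a1 = sigmoid(X @ W1 + b1)
    out = sigmoid(a1 @ W2 + b2)
    # Backward pass for 0.5 * mean squared error.
    d_out = (out - y) * out * (1 - out) / len(X)
    d_hid = d_out @ W2.T * a1 * (1 - a1)
    grads = [X.T @ d_hid,
             d_hid.sum(axis=0, keepdims=True),
             a1.T @ d_out,
             d_out.sum(axis=0, keepdims=True)]
    # Momentum update: v <- momentum * v - lr * g ; p <- p + v (in place).
    for p, v, g in zip(params, velocity, grads):
        v *= momentum
        v -= lr * g
        p += v

final_out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("final MSE:", float(np.mean((final_out - y) ** 2)))
```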

