Convergence analysis of a back-propagation algorithm with adaptive momentum

2011 ◽  
Vol 74 (5) ◽  
pp. 749-752 ◽  
Author(s):  
Hongmei Shao ◽  
Gaofeng Zheng

2016 ◽  
Vol 114 ◽  
pp. 79-87 ◽  
Author(s):  
Alaa Ali Hameed ◽  
Bekir Karlik ◽  
Mohammad Shukri Salman

2012 ◽  
Vol 09 ◽  
pp. 432-439 ◽  
Author(s):  
MUHAMMAD ZUBAIR REHMAN ◽  
NAZRI MOHD. NAWI

Despite being widely used in practical problems around the world, the gradient descent back-propagation algorithm suffers from slow convergence and convergence to local minima. Previous researchers have suggested modifications to improve its convergence, such as careful selection of initial weights and biases, learning rate, momentum, network topology, activation function, and the value of the 'gain' in the activation function. This research proposes an algorithm, 'Gradient Descent with Adaptive Momentum (GDAM)', that improves the performance of back-propagation by adapting the momentum term while keeping the gain value fixed during all network trials. The performance of GDAM is compared with 'Gradient Descent with fixed Momentum (GDM)' and 'Gradient Descent Method with Adaptive Gain (GDM-AG)'. The learning rate is fixed at 0.4, the maximum number of epochs is set to 3000, and the sigmoid activation function is used throughout the experiments. The results show that GDAM outperforms the previous methods, reaching an accuracy ratio of 1.0 on classification problems such as Wine Quality, Mushroom, and Thyroid disease.
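The abstract names the method but does not give GDAM's exact momentum-adaptation rule. The NumPy sketch below is therefore only an illustration of the general idea: back-propagation on a one-hidden-layer sigmoid network (gain fixed, learning rate 0.4, as in the paper's setup) with a momentum coefficient that is raised while successive updates keep descending in a consistent direction. The specific adaptation heuristic, network size, and function names here are assumptions, not the authors' formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gdam(X, y, hidden=8, lr=0.4, max_epochs=3000, seed=0):
    """Back-propagation with an adaptive momentum coefficient.

    A GDAM-style sketch: the momentum adaptation rule below is an
    assumed heuristic, since the abstract does not specify it.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    dW1 = np.zeros_like(W1)  # previous updates (momentum memory)
    dW2 = np.zeros_like(W2)
    mu = 0.5                 # momentum coefficient, adapted each epoch

    for _ in range(max_epochs):
        # forward pass (activation gain held fixed, per the paper's setup)
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)

        # back-propagate the squared error
        d_out = (out - y) * out * (1.0 - out)
        d_hid = (d_out @ W2.T) * h * (1.0 - h)
        g2 = h.T @ d_out
        g1 = X.T @ d_hid

        # adapt momentum: a negative inner product between the new
        # gradient and the last update means the search is still moving
        # in a consistent descent direction, so momentum is increased
        align = np.sum(g1 * dW1) + np.sum(g2 * dW2)
        mu = min(mu * 1.05, 0.9) if align < 0 else max(mu * 0.7, 0.1)

        dW1 = -lr * g1 + mu * dW1
        dW2 = -lr * g2 + mu * dW2
        W1 += dW1
        W2 += dW2
    return W1, W2
```

With `y` as a column vector of 0/1 labels, thresholding the network's sigmoid outputs at 0.5 yields class predictions for benchmarks like those used in the paper.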


2012 ◽  
Vol 09 ◽  
pp. 448-455 ◽  
Author(s):  
NORHAMREEZA ABDUL HAMID ◽  
NAZRI MOHD NAWI ◽  
ROZAIDA GHAZALI ◽  
MOHD NAJIB MOHD SALLEH

This paper presents a new method that keeps the back-propagation algorithm from getting stuck in local minima and suffering the slow convergence caused by neuron saturation in the hidden layer. In the proposed algorithm, each training pattern has its own activation functions for the neurons in the hidden layer; these are adjusted by adapting the gain parameters together with adaptive momentum and learning-rate values during the learning process. The efficiency of the proposed algorithm is compared with the conventional back-propagation gradient descent and an existing back-propagation gradient descent with adaptive gain by simulation on three benchmark problems, namely iris, glass, and thyroid.
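The key idea here is a pattern-specific gain (slope) on each hidden-layer sigmoid, adapted alongside the weights. The sketch below shows one hypothetical single-pattern update: how such a gain enters the forward pass and how its gradient is obtained. Variable names and update rules are illustrative assumptions, and the momentum and learning rate are held constant for brevity even though the paper adapts them as well.

```python
import numpy as np

def sigmoid(net, gain):
    # logistic activation with an explicit gain (slope) parameter
    return 1.0 / (1.0 + np.exp(-gain * net))

def train_pattern(x, t, W1, W2, gain, vel, lr=0.1, lr_gain=0.01, mu=0.5):
    """One update for a single training pattern whose hidden neurons
    carry their own gain vector `gain` (one slope per hidden unit).
    Weight updates are momentum-smoothed via the velocity dict `vel`.
    """
    # forward pass: pattern-specific gains shape the hidden activations
    net1 = x @ W1
    h = sigmoid(net1, gain)
    o = sigmoid(h @ W2, 1.0)

    # back-propagated error terms (the gain enters the derivative)
    d_out = (o - t) * o * (1.0 - o)
    back = d_out @ W2.T
    d_hid = back * gain * h * (1.0 - h)

    # gain gradient: d/dg sigmoid(g * net) = net * h * (1 - h)
    d_gain = back * net1 * h * (1.0 - h)

    # momentum-smoothed weight updates, plus the gain adaptation
    vel['W1'] = mu * vel['W1'] - lr * np.outer(x, d_hid)
    vel['W2'] = mu * vel['W2'] - lr * np.outer(h, d_out)
    W1 += vel['W1']
    W2 += vel['W2']
    gain -= lr_gain * d_gain   # adjust this pattern's hidden gains
    return W1, W2, gain
```

Because the gains are stored per training pattern, each pattern can sharpen or flatten its own hidden activations, which is how the method counters neuron saturation.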

