The Effect of Adaptive Momentum in Improving the Accuracy of Gradient Descent Back Propagation Algorithm on Classification Problems

2012 ◽  
Vol 09 ◽  
pp. 432-439 ◽  
Author(s):  
MUHAMMAD ZUBAIR REHMAN ◽  
NAZRI MOHD. NAWI

Despite being widely used in practical problems around the world, the Gradient Descent Back-propagation algorithm suffers from slow convergence and convergence to local minima. Previous researchers have suggested modifications to improve its convergence, such as careful selection of initial weights and biases, learning rate, momentum, network topology, activation function, and the value of the 'gain' in the activation function. This research proposes an algorithm that improves the performance of back-propagation, 'Gradient Descent with Adaptive Momentum (GDAM)', which keeps the gain value fixed during all network trials. The performance of GDAM is compared with 'Gradient Descent with Fixed Momentum (GDM)' and 'Gradient Descent Method with Adaptive Gain (GDM-AG)'. The learning rate is fixed at 0.4 and the maximum number of epochs is set to 3000, with a sigmoid activation function used throughout the experiments. The results show that GDAM outperforms the previous methods, achieving an accuracy ratio of 1.0 on classification problems such as Wine Quality, Mushroom, and Thyroid disease.
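The abstract does not spell out the GDAM update rule, so the following Python sketch uses a common adaptive-momentum heuristic as an illustrative assumption: grow the momentum coefficient while the training error keeps falling and shrink it when the error rises, with the learning rate fixed at 0.4 and up to 3000 epochs as in the experiments above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gdam(X, y, epochs=3000, lr=0.4, mom=0.5, seed=0):
    # Adaptive-momentum heuristic (illustrative assumption; the abstract
    # does not give the exact GDAM rule): grow the momentum while the
    # error keeps falling, cut it back when the error rises.
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    dw_prev, db_prev = np.zeros_like(w), 0.0
    prev_err = np.inf
    for _ in range(epochs):
        out = sigmoid(X @ w + b)
        err = np.mean((y - out) ** 2)
        mom = min(mom * 1.05, 0.9) if err < prev_err else max(mom * 0.7, 0.1)
        prev_err = err
        delta = out - y                     # sigmoid + cross-entropy delta
        dw = -lr * X.T @ delta / len(y) + mom * dw_prev
        db = -lr * delta.mean() + mom * db_prev
        w, b = w + dw, b + db
        dw_prev, db_prev = dw, db
    return w, b

# Toy linearly separable problem: the class label is the first feature.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 0., 1., 1.])
w, b = train_gdam(X, y)
pred = (sigmoid(X @ w + b) > 0.5).astype(float)
```

The error-driven schedule lets the momentum stay large on smooth stretches of the loss surface (accelerating past shallow regions) while automatically backing off whenever an oscillation makes the error rise.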


2012 ◽  
Vol 09 ◽  
pp. 448-455 ◽  
Author(s):  
NORHAMREEZA ABDUL HAMID ◽  
NAZRI MOHD NAWI ◽  
ROZAIDA GHAZALI ◽  
MOHD NAJIB MOHD SALLEH

This paper presents a new method to keep the back-propagation algorithm from getting stuck in local minima and to overcome the slow convergence caused by neuron saturation in the hidden layer. In the proposed algorithm, each training pattern has its own activation functions for the neurons in the hidden layer, which are adjusted by adapting the gain parameters together with adaptive momentum and learning rate values during the learning process. The efficiency of the proposed algorithm is compared, by simulation on three benchmark problems (namely iris, glass, and thyroid), with conventional back-propagation gradient descent and the current back-propagation gradient descent with adaptive gain.
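The key ingredient above is a per-neuron gain (slope) parameter inside the sigmoid: a saturated neuron can be "revived" by lowering its gain rather than waiting for slow weight updates. A minimal sketch of the gain-parameterized activation and its gradient, assuming the gain is trained like any other parameter (the paper's exact adaptation schedule is not given in the abstract):

```python
import numpy as np

def sigmoid_gain(net, c):
    """Sigmoid with a tunable gain (slope) parameter c for a hidden neuron."""
    return 1.0 / (1.0 + np.exp(-c * net))

def gain_gradient(net, c):
    """Gradient of the activation with respect to the gain:
    for o = sigmoid(c * net), do/dc = net * o * (1 - o),
    so c can be updated by gradient descent alongside the weights."""
    o = sigmoid_gain(net, c)
    return net * o * (1.0 - o)

# Sanity-check the analytic gradient against a central finite difference.
net, c = 0.8, 1.0
analytic = gain_gradient(net, c)
eps = 1e-6
numeric = (sigmoid_gain(net, c + eps) - sigmoid_gain(net, c - eps)) / (2 * eps)
```

Because `do/dc` carries the same `o * (1 - o)` factor as the weight gradients, the gain update fits into the standard back-propagation pass with no extra forward computation.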


2016 ◽  
Vol 114 ◽  
pp. 79-87 ◽  
Author(s):  
Alaa Ali Hameed ◽  
Bekir Karlik ◽  
Mohammad Shukri Salman

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2704
Author(s):  
Yunhan Lin ◽  
Wenlong Ji ◽  
Haowei He ◽  
Yaojie Chen

In this paper, an intelligent water shooting robot system for situations of carrier shake and target movement is designed, which uses a 2 DOF (degree of freedom) robot as an actuator, a photoelectric camera to detect and track the desired target, and a gyroscope to keep the robot’s body stable when it is mounted on a moving carrier. In particular, for accurate shooting, an online tuning model of the water jet landing point based on the back-propagation algorithm is proposed. The model has two stages. In the first stage, the polyfit function of Matlab is used to fit a model that satisfies the law of jet motion under ideal, interference-free conditions. In the second stage, the model uses the back-propagation algorithm to update the parameters online according to visual feedback of the landing point position. The model established by this method can dynamically eliminate the interference of external factors and achieve precise on-target shooting. The simulation results show that the model can dynamically adjust its parameters according to the relationship between the landing point and the desired target, keeping the predicted pitch angle error within 0.1°. In the test on the actual platform, when the landing point is 0.5 m away from the desired target, the model needs only 0.3 s to adjust the water jet to hit the target. Compared to the state-of-the-art GA-BP (genetic algorithm back-propagation) method, the proposed method keeps the predicted pitch angle error within 0.1° using 1/4 of the model parameters, 1/7 of the forward-propagation time, and 1/200 of the back-propagation calculation time.
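The two-stage idea can be sketched in Python, with `numpy.polyfit` standing in for Matlab's polyfit. The quadratic ideal-jet model, the assumed nozzle speed, and the normalized-LMS online correction below are illustrative assumptions; the paper's actual model structure and back-propagation update are not detailed in the abstract.

```python
import numpy as np

# Stage 1: offline fit of landing distance vs. pitch angle under ideal,
# interference-free conditions (here, simple projectile range data; the
# nozzle speed of 8 m/s is an assumed value).
angles = np.linspace(10.0, 60.0, 20)             # pitch angle, degrees
v = 8.0                                          # nozzle speed, m/s
ideal = v**2 * np.sin(2 * np.radians(angles)) / 9.81
coef = np.polyfit(angles, ideal, 2)              # quadratic jet model

# Stage 2: online correction. Each camera observation of the real landing
# point nudges the coefficients with a normalized gradient (LMS) step on
# the squared landing-point error.
def online_update(coef, angle, observed, lr=0.1):
    basis = np.array([angle**2, angle, 1.0])
    err = np.polyval(coef, angle) - observed     # predicted minus measured
    return coef - lr * err * basis / (basis @ basis)

# Example: a steady disturbance (e.g. wind) shortens the real jet by
# 0.3 m at a 30-degree pitch; repeated feedback absorbs it into the model.
observed = np.polyval(coef, 30.0) - 0.3
c = coef.copy()
for _ in range(50):
    c = online_update(c, 30.0, observed)
```

The same structure generalizes to the paper's setting: the offline fit gives a physically sensible starting point, and the online gradient steps let the model track slow external disturbances without refitting from scratch.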

