Efficient gradient descent method of RBF neural networks with adaptive learning rate

2002 ◽  
Vol 19 (3) ◽  
pp. 255-258
Author(s):  
Jiayu Lin ◽  
Ying Liu

2018 ◽
Author(s):  
Kazunori D Yamada

In the deep learning era, stochastic gradient descent is the most common method used for optimizing neural network parameters. Among the various mathematical optimization methods, gradient descent is the most naive. With plain gradient descent, the learning rate must normally be adjusted manually to achieve quick convergence. Many optimizers have been developed to control the learning rate and increase convergence speed. Generally, these optimizers adjust the learning rate automatically in response to the learning status, and they have been gradually improved by incorporating effective aspects of earlier methods. In this study, we developed a new optimizer: YamAdam. Our optimizer is based on Adam, which utilizes the first and second moments of previous gradients. In addition to the moment estimation system, we incorporated an advantageous part of AdaDelta, namely its unit correction system, into YamAdam. According to benchmark tests on some common datasets, our optimizer showed similar or faster convergence compared to existing methods. YamAdam is thus an option as an alternative optimizer for deep learning.
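For illustration, here is a minimal sketch of an Adam-style update augmented with an AdaDelta-style unit correction, in the spirit of YamAdam. The exact YamAdam update rule, decay constants, and bias handling are not given in the abstract, so the formulas below (a single decay rate `beta` and an accumulator `u` of past squared updates that supplies the step's units) are illustrative assumptions, not the published method.

```python
import numpy as np

def yamadam_like_update(param, grad, state, beta=0.9, eps=1e-8):
    # Adam-style exponential moving averages of the gradient
    # (first moment) and the squared gradient (second moment)
    state["m"] = beta * state["m"] + (1 - beta) * grad
    state["v"] = beta * state["v"] + (1 - beta) * grad ** 2
    # AdaDelta-style unit correction: the RMS of past *updates* replaces
    # a hand-tuned global learning rate, so the step carries the same
    # units as the parameter itself
    step = -np.sqrt(state["u"] + eps) / np.sqrt(state["v"] + eps) * state["m"]
    state["u"] = beta * state["u"] + (1 - beta) * step ** 2
    return param + step, state

# Usage: minimize f(x) = x**2 starting from x = 5
state = {"m": 0.0, "v": 0.0, "u": 0.0}
x = 5.0
for _ in range(2000):
    x, state = yamadam_like_update(x, 2 * x, state)  # gradient of x**2 is 2x
print(x)  # x moves toward the minimum at 0
```

As in AdaDelta, early steps are small because the update accumulator `u` starts at zero; the unit correction then scales the step size automatically as training proceeds.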


2012 ◽  
Vol 09 ◽  
pp. 432-439 ◽  
Author(s):  
MUHAMMAD ZUBAIR REHMAN ◽  
NAZRI MOHD. NAWI

Despite being widely used in practical problems around the world, the gradient descent back-propagation algorithm suffers from slow convergence and convergence to local minima. Previous researchers have suggested modifications to improve its convergence, such as careful selection of initial weights and biases, learning rate, momentum, network topology, activation function, and the value of the 'gain' in the activation function. This research proposes an algorithm for improving the performance of back-propagation, 'Gradient Descent with Adaptive Momentum (GDAM)', which keeps the gain value fixed during all network trials. The performance of GDAM is compared with 'Gradient Descent with fixed Momentum (GDM)' and 'Gradient Descent Method with Adaptive Gain (GDM-AG)'. The learning rate is fixed to 0.4 and the maximum number of epochs is set to 3000, with the sigmoid activation function used throughout the experiments. The results show that GDAM is a better approach than the previous methods, reaching an accuracy ratio of 1.0 on classification problems such as Wine Quality, Mushroom, and Thyroid disease.
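The abstract does not spell out GDAM's momentum-adaptation rule, so the sketch below uses a hypothetical error-driven heuristic (raise momentum while the error falls, shrink it otherwise) as a stand-in. It trains a small sigmoid network by back-propagation with the stated settings (learning rate 0.4, at most 3000 epochs, fixed gain), and XOR stands in for the Wine Quality, Mushroom, and Thyroid datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, gain=1.0):
    # Sigmoid activation with a fixed gain, as in the GDAM trials
    return 1.0 / (1.0 + np.exp(-gain * x))

# Toy binary classification data (XOR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
lr, momentum, prev_err = 0.4, 0.5, np.inf  # learning rate fixed to 0.4

for epoch in range(3000):  # maximum epochs set to 3000
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = np.mean((y - out) ** 2)
    # Illustrative adaptive-momentum heuristic (an assumption, not the
    # paper's exact rule): grow momentum while the error keeps falling
    momentum = min(momentum * 1.05, 0.95) if err < prev_err else max(momentum * 0.7, 0.1)
    prev_err = err
    # Backward pass (mean squared error, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Momentum-smoothed weight updates
    vW2 = momentum * vW2 - lr * (h.T @ d_out)
    vW1 = momentum * vW1 - lr * (X.T @ d_h)
    W2 += vW2
    W1 += vW1

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```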


Author(s):  
Afan Galih Salman ◽  
Yen Lina Prasetio

Artificial neural network (ANN) technology can be applied to rainfall prediction using a learning approach, with prediction accuracy measured by the coefficient of determination (R²) and the root mean square error (RMSE). This research implements an Elman recurrent ANN, heuristically optimized on El Niño Southern Oscillation (ENSO) variables: wind, the Southern Oscillation Index (SOI), sea surface temperature (SST), and outgoing longwave radiation (OLR), to forecast regional monthly rainfall in Bongan, Bali. The heuristic learning optimization extends the standard gradient descent learning algorithm into two training algorithms: gradient descent with momentum and gradient descent with an adaptive learning rate. The pattern of the input data affects the performance of the Elman recurrent network during estimation: the first data split, 75% training data and 25% testing data, produces a maximum R² of 74.6%, while the second split, 50% training data and 50% testing data, produces a maximum R² of 49.8%.
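As a sketch of the two training heuristics named above, the function below implements gradient descent with momentum and an adaptive learning rate. The accept/reject rule and the growth and decay factors (1.05, 0.7, 1.04) follow the common traingdx-style convention and are assumptions, not values reported in the abstract; the toy quadratic loss stands in for the Elman network's rainfall error.

```python
import numpy as np

def train_gdx(params, grad_fn, loss_fn, epochs=500,
              lr=0.01, mom=0.9, lr_inc=1.05, lr_dec=0.7, max_growth=1.04):
    # Gradient descent with momentum and an adaptive learning rate:
    # grow the rate while the loss behaves, shrink it after a bad step.
    velocity = np.zeros_like(params)
    prev_loss = loss_fn(params)
    for _ in range(epochs):
        step = mom * velocity - lr * grad_fn(params)
        candidate = params + step
        loss = loss_fn(candidate)
        if loss > prev_loss * max_growth:
            lr *= lr_dec          # step rejected: shrink the learning rate
            velocity[:] = 0.0     # and reset the momentum history
        else:
            params, velocity, prev_loss = candidate, step, loss
            lr *= lr_inc          # step accepted: grow the learning rate
    return params

# Usage on a toy quadratic loss standing in for the network's error surface
loss = lambda w: np.sum((w - 3.0) ** 2)
grad = lambda w: 2.0 * (w - 3.0)
w = train_gdx(np.zeros(2), grad, loss)
print(np.round(w, 3))  # w approaches the minimizer [3, 3]
```

The reject branch is what makes the adaptive rate safe: whenever the growing learning rate overshoots, the step is discarded and the rate is pulled back down before training continues.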

