Beyond back-propagation learning for diabetic detection: Convergence comparison of gradient descent, momentum and Adaptive Learning Rate

Author(s):  
Sukmawati N. Endah ◽  
Aris P. Widodo ◽  
Muhammad L. Fariq ◽  
Shavira I. Nadianada ◽  
Fadil Maulana
Author(s):  
Nazri Mohd Nawi ◽  
Faridah Hamzah ◽  
Norhamreeza Abdul Hamid ◽  
Muhammad Zubair Rehman ◽  
Mohammad Aamir ◽  
...  

2012 ◽  
Vol 09 ◽  
pp. 448-455 ◽  
Author(s):  
Norhamreeza Abdul Hamid ◽  
Nazri Mohd Nawi ◽  
Rozaida Ghazali ◽  
Mohd Najib Mohd Salleh

This paper presents a new method to prevent the back propagation algorithm from becoming stuck in local minima and to overcome the slow convergence caused by neuron saturation in the hidden layer. In the proposed algorithm, each training pattern has its own activation functions for the neurons in the hidden layer, which are adjusted through the adaptation of gain parameters together with adaptive momentum and learning rate values during the learning process. The efficiency of the proposed algorithm is compared with conventional back propagation gradient descent and the current back propagation gradient descent with adaptive gain, by means of simulation on three benchmark problems, namely iris, glass and thyroid.
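The three ingredients the abstract names, a gain-adjustable activation, a momentum update, and an adaptive learning rate, can be illustrated with a minimal sketch. This is not the authors' algorithm, just generic versions of each component; all function names and constants are illustrative:

```python
import numpy as np

def sigmoid_gain(x, gain):
    """Logistic activation with an adjustable gain parameter:
    a larger gain steepens the slope, which changes how quickly
    hidden-layer neurons saturate."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def momentum_step(w, grad, velocity, lr, momentum=0.9):
    """Gradient descent with momentum: the velocity accumulates past
    gradients, helping updates roll through shallow local minima."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def adapt_lr(lr, loss, prev_loss, grow=1.05, shrink=0.7, tol=1.04):
    """A common adaptive learning-rate heuristic: grow the rate while
    the loss falls, shrink it when the loss rises past a tolerance."""
    if loss < prev_loss:
        return lr * grow
    if loss > prev_loss * tol:
        return lr * shrink
    return lr
```

In a training loop, `adapt_lr` would be called once per epoch on the training error, while `momentum_step` updates each weight matrix from its back-propagated gradient.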


Author(s):  
Afan Galih Salman ◽  
Yen Lina Prasetio

Artificial neural network (ANN) technology for rainfall prediction can be applied using a learning approach. The ANN's prediction accuracy is measured by the coefficient of determination (R2) and the root mean square error (RMSE). This research implements an Elman recurrent ANN, heuristically optimized, based on the El Niño Southern Oscillation (ENSO) variables wind, Southern Oscillation Index (SOI), sea surface temperature (SST) and outgoing longwave radiation (OLR), to forecast regional monthly rainfall in Bongan, Bali. The heuristic learning optimization is essentially a performance enhancement of the standard gradient descent learning algorithm into two training algorithms: gradient descent with momentum and adaptive learning rate. The pattern of the input data affects the performance of the Elman recurrent neural network in the estimation process. The first data split, 75% training data and 25% testing data, produced a maximum R2 of 74.6%, while the second split, 50% training data and 50% testing data, produced a maximum R2 of 49.8%.
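The forward pass of an Elman network, and the two accuracy metrics the abstract uses, can be sketched as follows. This is a generic illustration rather than the authors' model; the layer sizes and weight names are assumptions:

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_o):
    """Elman recurrent network: the hidden state is fed back through
    a context layer at the next time step.
    h_t = tanh(W_in x_t + W_rec h_{t-1} + b_h),  y_t = W_out h_t + b_o."""
    h = np.zeros(W_rec.shape[0])
    ys = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)
        ys.append(W_out @ h + b_o)
    return np.array(ys)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

For monthly rainfall forecasting, each `x` in `x_seq` would hold the four ENSO predictors (wind, SOI, SST, OLR) for one month, and the scalar output `y_t` would be the predicted rainfall.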

