AVLR-EBP: A Variable Step Size Approach to Speed-up the Convergence of Error Back-Propagation Algorithm

2011, Vol 33 (2), pp. 201-214
Author(s): Arman Didandeh, Nima Mirbakhsh, Ali Amiri, Mahmood Fathy
Author(s): Michael Negnevitsky, Martin J. Ringrose

A fuzzy logic controller for updating training parameters in the error back-propagation algorithm is presented. The controller is based on heuristic rules for speeding up the convergence of the training process, adjusting both the learning rate and the momentum constant.
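The abstract does not reproduce the fuzzy rule base itself, so the following Python sketch only illustrates the general idea with crisp stand-in rules (all names and thresholds here are assumptions, not the paper's controller): grow the step while the training error falls steadily, and shrink it when the error oscillates.

```python
import numpy as np

def adapt_training_parameters(errors, lr, momentum,
                              lr_bounds=(1e-4, 1.0),
                              mom_bounds=(0.0, 0.99)):
    """Heuristic update of learning rate and momentum from the recent
    error history (a crisp stand-in for the paper's fuzzy rules)."""
    if len(errors) < 3:
        return lr, momentum          # not enough history yet
    d1 = errors[-1] - errors[-2]     # latest change in training error
    d2 = errors[-2] - errors[-3]     # previous change in training error
    if d1 < 0 and d2 < 0:
        lr *= 1.05                   # error falling steadily: accelerate
        momentum *= 1.02
    elif d1 * d2 < 0:
        lr *= 0.7                    # error oscillating: damp the step
        momentum *= 0.9
    lr = float(np.clip(lr, *lr_bounds))
    momentum = float(np.clip(momentum, *mom_bounds))
    return lr, momentum
```

A genuine fuzzy controller would replace these hard thresholds with membership functions over the error changes and defuzzify the rule outputs, but the control signal it produces has the same shape.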


Author(s): Maria Sivak, Vladimir Timofeev
The paper considers the problem of building robust neural networks using different robust loss functions. Applying such neural networks is reasonable when working with noisy data, and it can serve as an alternative to data preprocessing and to making the neural network architecture more complex. To work adequately, the error back-propagation algorithm requires a loss function that is continuously differentiable or twice differentiable. According to this requirement, five robust loss functions were chosen (Andrews, Welsch, Huber, Ramsey and Fair). Using these functions in the error back-propagation algorithm instead of the quadratic one yields an entirely new class of neural networks. To investigate the properties of the resulting networks, a number of computational experiments were carried out, considering different outlier fractions and different numbers of training epochs. The first stage involved tuning the obtained neural networks, that is, choosing the values of the internal loss-function parameters that yielded the highest network accuracy; a preliminary study was carried out to determine the ranges of these parameter values. The results of the first stage led to recommendations on the best parameter values for each of the loss functions under study. The second stage dealt with comparing the investigated robust networks with each other and with the classical one. The analysis of the results shows that the robust technique leads to a significant increase in neural network accuracy and in learning speed.
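The abstract does not give the parameter values the study recommends, but the mechanics of swapping a robust loss into back-propagation are simple: only the loss and its derivative with respect to the residual change. Below is a minimal NumPy sketch of two of the listed functions, Huber and Welsch; the tuning constants delta and c are illustrative assumptions, not the values found in the paper.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss and its derivative w.r.t. the residual r.
    Quadratic near zero, linear for |r| > delta, so outliers
    contribute a bounded gradient."""
    small = np.abs(r) <= delta
    loss = np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))
    grad = np.where(small, r, delta * np.sign(r))
    return loss, grad

def welsch(r, c=2.0):
    """Welsch loss and its derivative: the loss saturates, so the
    gradients of large outliers decay towards zero."""
    loss = (c**2 / 2.0) * (1.0 - np.exp(-(r / c) ** 2))
    grad = r * np.exp(-(r / c) ** 2)
    return loss, grad

# In back-propagation, grad replaces the raw residual r that the
# quadratic loss would propagate into the output-layer delta:
residuals = np.array([0.1, -0.5, 4.0])   # toy predictions minus targets
_, delta_out = huber(residuals)
```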

