Fast neural network algorithm for solving classification tasks: Batch error back-propagation algorithm

Author(s):  
Noor Albarakati ◽  
Vojislav Kecman


Author(s):
Maria Sivak ◽  
Vladimir Timofeev

The paper considers the problem of building robust neural networks using different robust loss functions. Applying such neural networks is reasonable when working with noisy data, and it can serve as an alternative to data preprocessing or to making the neural network architecture more complex. To work adequately, the error back-propagation algorithm requires the loss function to be continuously differentiable or twice differentiable. According to this requirement, five robust loss functions were chosen (Andrews, Welsch, Huber, Ramsey and Fair). Using these functions in the error back-propagation algorithm instead of the quadratic one yields an entirely new class of neural networks. To investigate the properties of the resulting networks, a number of computational experiments were carried out, considering different outlier fractions and various numbers of epochs. The first stage involved tuning the obtained neural networks, i.e., choosing the values of the internal loss function parameters that achieved the highest network accuracy; a preliminary study was carried out to determine the ranges of these parameter values. The results of the first stage led to recommendations on the best parameter values for each of the loss functions under study. The second stage compared the investigated robust networks with each other and with the classical one. The analysis of the results shows that using the robust technique leads to a significant increase in neural network accuracy and in learning speed.
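As an illustration of the robust technique described above, the sketch below shows the Huber loss, one of the five functions the paper studies, and how its derivative replaces the raw residual that the quadratic loss contributes to the output-layer delta in back-propagation. This is a minimal C++ sketch under common textbook definitions; the tuning constant c and its default value 1.345 are illustrative assumptions, not the paper's chosen parameter values.

#include <algorithm>
#include <cmath>

// Huber loss: quadratic inside the band |r| <= c, linear outside,
// so outliers are penalized less severely than under quadratic loss.
double huber_loss(double r, double c = 1.345) {
    double a = std::abs(r);
    return a <= c ? 0.5 * r * r : c * a - 0.5 * c * c;
}

// Derivative (influence function): residuals are clipped to [-c, c],
// which bounds the gradient contribution of outliers.
double huber_grad(double r, double c = 1.345) {
    return std::clamp(r, -c, c);   // requires C++17
}

// Output-layer delta for one neuron. With the quadratic loss this
// would be (y - t) * f_prime; the robust version bounds the residual.
double output_delta(double y, double t, double f_prime, double c = 1.345) {
    return huber_grad(y - t, c) * f_prime;
}

The internal parameter c plays the role tuned in the paper's first stage: a small c clips residuals aggressively (strong robustness, slower fitting of clean data), while a large c recovers near-quadratic behavior.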


2020 ◽  
Vol 34 (15) ◽  
pp. 2050161
Author(s):  
Vipin Tiwari ◽  
Ashish Mishra

This paper designs a novel classification hardware framework based on a neural network (NN). It utilizes the COordinate Rotation DIgital Computer (CORDIC) algorithm to implement the activation function of the NN. Training was performed in software using an error back-propagation algorithm (EBPA) implemented in C++; the final weights were then loaded into the implemented hardware framework to perform classification. The hardware framework is developed in the Xilinx 9.2i environment using VHDL as the programming language. Classification tests are performed on benchmark datasets obtained from the UCI machine learning repository. The results are compared with competitive classification approaches on the same datasets. Extensive analysis reveals that the proposed hardware framework delivers more efficient results than the existing classifiers.
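The abstract does not reproduce the CORDIC iteration scheme, so the following is a hedged software model of hyperbolic CORDIC in rotation mode, a standard way to compute sinh and cosh (hence tanh and the logistic sigmoid) using only shifts, adds and a small angle table. It is illustrative C++, not the authors' VHDL design.

#include <cmath>

// Hyperbolic CORDIC, rotation mode: drive z toward 0 while rotating
// (x, y). Starting from x = 1/K, y = 0 gives x ~ cosh(z0), y ~ sinh(z0).
// Iterations 4 and 13 are repeated, as hyperbolic CORDIC convergence
// requires; the scheme is valid for |z| up to about 1.118.
void cordic_sinh_cosh(double z, double& sinh_z, double& cosh_z) {
    const int n = 16;              // iteration count, set by word length
    double K = 1.0;                // accumulated gain for this schedule
    for (int i = 1; i <= n; ++i) {
        int reps = (i == 4 || i == 13) ? 2 : 1;
        for (int r = 0; r < reps; ++r)
            K *= std::sqrt(1.0 - std::pow(2.0, -2.0 * i));
    }
    double x = 1.0 / K, y = 0.0;   // pre-scale so results come out unscaled
    for (int i = 1; i <= n; ++i) {
        int reps = (i == 4 || i == 13) ? 2 : 1;
        for (int r = 0; r < reps; ++r) {
            double d = (z >= 0.0) ? 1.0 : -1.0;   // rotate toward z = 0
            double t = std::pow(2.0, -i);         // 2^-i: a wire shift in hardware
            double xn = x + d * y * t;
            double yn = y + d * x * t;
            z -= d * std::atanh(t);               // small ROM lookup in hardware
            x = xn; y = yn;
        }
    }
    sinh_z = y; cosh_z = x;
}

// Example NN activation: sigmoid(x) = 0.5 * (1 + tanh(x / 2)).
double cordic_sigmoid(double x) {
    double s, c;
    cordic_sinh_cosh(x / 2.0, s, c);
    return 0.5 * (1.0 + s / c);
}

Because each micro-rotation needs only shifts, additions and one table entry, CORDIC avoids multipliers entirely, which is what makes it attractive for FPGA implementations of activation functions.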


Author(s):  
Michael Negnevitsky ◽  
Martin J. Ringrose

A fuzzy logic controller for updating training parameters in the error back-propagation algorithm is presented. The controller is based on heuristic rules for speeding up the convergence of the training process, adjusting both the learning rate and the momentum constant.
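The controller's rule base and membership functions are not given in this abstract; the C++ fragment below is a crisp caricature of the kind of heuristics such a controller encodes (reward a falling error with a larger learning rate, punish an error spike by cutting the rate and suspending momentum). All thresholds and scale factors are illustrative assumptions, not the authors' rules.

// Adaptive training parameters, updated once per epoch from the
// change in training error.
struct TrainingParams {
    double learning_rate = 0.1;
    double momentum      = 0.9;
};

void update_params(TrainingParams& p, double err, double prev_err) {
    double change = (err - prev_err) / prev_err;   // relative error change
    if (change < 0.0) {
        // Error fell: accelerate cautiously.
        p.learning_rate *= 1.05;
        p.momentum = 0.9;
    } else if (change > 0.04) {
        // Error spiked: back off and drop momentum for this epoch.
        p.learning_rate *= 0.7;
        p.momentum = 0.0;
    }
    // Small increases (0..4%) leave the parameters untouched.
}

A fuzzy controller replaces the hard thresholds above with membership functions over the error change, so the learning rate and momentum vary smoothly instead of jumping between discrete regimes.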

