Speeding Up Back-Propagation Neural Networks

10.28945/2931 ◽  
2005 ◽  
Author(s):  
Mohammed A. Otair ◽  
Walid A. Salameh

There are many successful applications of backpropagation (BP) for training multilayer neural networks. However, it has many shortcomings: learning often takes a long time to converge, and the search may fall into local minima. One possible remedy for escaping local minima is to use a very small learning rate, but this slows down the learning process. The algorithm proposed in this study trains multilayer neural networks with a very small learning rate, especially when the training set is large. It can be applied in a generic manner to any network size that uses the backpropagation algorithm, in optical time (seen time). The paper describes the proposed algorithm and how it can improve the performance of back-propagation (BP). The feasibility of the proposed algorithm is demonstrated through a number of experiments on different network architectures.
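
As a concrete illustration (a minimal sketch only: the exponential error amplification below is our reading of the cited OBP papers, and `obp_error`, the layer sizes, and the learning rate are assumptions, not the authors' exact formulation), the idea is to amplify the output-layer error non-linearly before backpropagating it, so that training progresses quickly even with a very small learning rate:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def obp_error(delta):
    # Assumed OBP-style transform: amplify the raw error exponentially
    # while preserving its sign (our reading of Otair & Salameh).
    return np.sign(delta) * (1.0 + np.exp(delta ** 2))

def train_step(x, y, W1, W2, lr=0.001, use_obp=True):
    h = sigmoid(W1 @ x)                   # hidden-layer activations
    o = sigmoid(W2 @ h)                   # network outputs
    err = y - o                           # raw output error
    if use_obp:
        err = obp_error(err)              # amplified error signal
    d_out = err * o * (1 - o)             # output-layer delta
    d_hid = (W2.T @ d_out) * h * (1 - h)  # hidden-layer delta
    W2 += lr * np.outer(d_out, h)         # standard BP weight updates
    W1 += lr * np.outer(d_hid, x)
    return W1, W2
```

With `use_obp=False` this reduces to plain BP, which makes the speed comparison in the experiments straightforward to reproduce in spirit.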

10.28945/2932 ◽  
2005 ◽  
Author(s):  
Walid A. Salameh ◽  
Mohammed A. Otair

There are many successful applications of backpropagation (BP) for training multilayer neural networks. However, it has many shortcomings: learning often takes an unacceptably long time to converge, and it may fall into local minima. One possible remedy for escaping local minima is to use a very small learning rate, but this slows down the learning process. The proposed algorithm is presented for training multilayer neural networks with a very small learning rate, especially when the training set is large. It can be applied in a generic manner to any network size that uses the backpropagation algorithm, in optical time. This paper studies the performance of the Optical Backpropagation (OBP) algorithm (Otair & Salameh, 2004a, 2004b, 2005) in training a neural network for online handwritten character recognition, in comparison with backpropagation (BP).


Author(s):  
Ergin Kilic ◽  
Melik Dolen

This study focuses on slip prediction in a cable-drum system using artificial neural networks, with the prospect of developing a linear motion sensing scheme for such mechanisms. Both feed-forward and recurrent-type artificial neural network architectures are considered to capture the slip dynamics of cable-drum mechanisms. In the article, the network development is presented in a progressive (step-by-step) fashion, not only to make the design process transparent to readers but also to highlight the challenges associated with the design phase (i.e., selection of architecture, network size, training process parameters, etc.). Prediction performances of the devised networks are evaluated rigorously via an experimental study. Finally, a structured neural network, which embodies the network with the best prediction performance, is further developed to overcome the drift observed at low velocity. The study illustrates that the resulting structured neural network could predict the slip in the mechanism within an error band of 100 µm when an absolute reference is utilized.
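
To make the architectural contrast concrete, here is a purely illustrative Elman-style recurrent predictor (the class name, sizes, and initialization are our placeholders, not the networks devised in the study); its hidden state carries the drum's motion history, which a feed-forward network would instead need as explicit lagged inputs:

```python
import numpy as np

class RecurrentSlipPredictor:
    """Illustrative Elman-style network for one-step slip prediction."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (n_hid, n_in))   # input weights
        self.Wh = rng.normal(0, 0.1, (n_hid, n_hid))  # recurrent weights
        self.Wo = rng.normal(0, 0.1, (1, n_hid))      # output weights
        self.h = np.zeros(n_hid)                      # hidden state (memory)

    def step(self, x):
        # The recurrent term Wh @ h lets past inputs shape the current
        # estimate, capturing the slip dynamics described in the study.
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return float(self.Wo @ self.h)                # predicted slip
```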


1995 ◽  
Vol 03 (04) ◽  
pp. 1177-1191 ◽  
Author(s):  
HÉLÈNE PAUGAM-MOISY

This article is a survey of recent advances in multilayer neural networks. The first section is a short summary of multilayer neural networks: their history, their architecture, and their learning rule, the well-known back-propagation. In the following section, several theorems are cited which present one-hidden-layer neural networks as universal approximators. The next section points out that two hidden layers are often required for exactly realizing d-dimensional dichotomies. Defining the frontier between one-hidden-layer and two-hidden-layer networks is still an open problem. Several bounds on the size of a multilayer network which learns from examples are presented, and we emphasize the fact that, even if everything can be done with only one hidden layer, things can often be done better with two or more hidden layers. Finally, this assertion is supported by the behaviour of multilayer neural networks in two applications: prediction of pollution and odor recognition modelling.
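
For reference, the one-hidden-layer universality result the survey cites can be stated in its classical (Cybenko/Hornik) form: for any continuous $f$ on a compact set $K \subset \mathbb{R}^d$ and any $\varepsilon > 0$, there exist $N$, weights $w_i$, biases $b_i$, and coefficients $\alpha_i$ such that

$$\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon,$$

where $\sigma$ is a non-constant, bounded, continuous sigmoidal activation.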


1997 ◽  
Vol 08 (05n06) ◽  
pp. 509-515
Author(s):  
Yan Li ◽  
A. B. Rad

A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks and on optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on the training samples. Second, the outputs of this subnetwork are taken as inputs and the outputs of the training samples as desired outputs, and another subnetwork with n inputs and n outputs is trained. Finally, the two trained subnetworks are connected, yielding a trained multilayer neural network. Numerical simulation results based on both the linear least squares back-propagation (LSB) and the traditional back-propagation (BP) algorithms demonstrate the efficiency of the proposed method.
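
A minimal sketch of the two-step cascade described above, with simple least-squares fits standing in for the paper's LSB/BP training of each subnetwork (the helper names are ours):

```python
import numpy as np

def fit_linear(X, Y):
    # Least-squares stand-in for training one subnetwork (LSB/BP in the paper).
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def cascade_train(X, Y):
    # Step 1: subnetwork A maps the m-dimensional inputs to n outputs.
    W_a = fit_linear(X, Y)        # shape (m, n)
    Z = np.tanh(X @ W_a)          # subnetwork A's outputs
    # Step 2: subnetwork B (n inputs, n outputs) is trained on A's
    # outputs, with the original training targets as desired outputs.
    W_b = fit_linear(Z, Y)        # shape (n, n)
    return W_a, W_b               # connected net: x -> tanh(x @ W_a) @ W_b

def predict(x, W_a, W_b):
    return np.tanh(x @ W_a) @ W_b
```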


Author(s):  
Zakaria Noor Aldeen Mahmood Al Nuaimi ◽  
Rosni Abdullah

The Artificial Neural Network Training (ANNT) process is an optimization problem over the weight set which has inspired researchers for a long time. By optimizing the training of a neural network with an optimal weight set, better results can be obtained from the network. Traditional neural network algorithms such as Back Propagation (BP) were used for ANNT, but they have drawbacks such as computational complexity and getting trapped in local minima. Therefore, evolutionary algorithms such as the Swarm Intelligence (SI) algorithms have been employed in ANNT to overcome such issues. The Artificial Bee Colony (ABC) optimization algorithm is one of the competitive algorithms in the SI group. Hybrid algorithms are also a fundamental concern in the optimization field; they aim to combine the advantages of different algorithms into one. In this work, we aimed to highlight the performance of the Hybrid Particle-move Artificial Bee Colony (HPABC) algorithm by applying it to the ANNT application. The performance of the HPABC algorithm was investigated on four benchmark pattern-classification datasets, and the results were compared with other algorithms. The results obtained illustrate that the HPABC algorithm can be used efficiently for ANNT. HPABC outperformed the original ABC and PSO, as well as other state-of-the-art and hybrid algorithms, in terms of time, number of function evaluations, and recognition accuracy.
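
Setting the HPABC operators themselves aside (the abstract does not spell them out), the ANNT-by-metaheuristic setup it builds on can be sketched as follows: flatten the weights into a vector, define fitness as the training error, and let a population-based search minimize it. The loop below is deliberately generic; ABC, PSO, and HPABC each substitute their own move rules:

```python
import numpy as np

def nn_loss(w, X, y, n_in, n_hid):
    # Decode the flat weight vector into a one-hidden-layer classifier.
    W1 = w[:n_in * n_hid].reshape(n_hid, n_in)
    W2 = w[n_in * n_hid:].reshape(1, n_hid)
    z = (W2 @ np.tanh(W1 @ X.T)).ravel()
    p = 1.0 / (1.0 + np.exp(-z))
    return float(np.mean((p - y) ** 2))   # fitness = training error

def population_train(X, y, n_in, n_hid, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = n_in * n_hid + n_hid
    swarm = rng.normal(0.0, 1.0, (pop, dim))
    best = min(swarm, key=lambda w: nn_loss(w, X, y, n_in, n_hid))
    for _ in range(iters):
        # Generic neighbourhood move; ABC's employed/onlooker/scout bees
        # and HPABC's particle moves refine this step in practice.
        trial = best + rng.normal(0.0, 0.1, dim)
        if nn_loss(trial, X, y, n_in, n_hid) < nn_loss(best, X, y, n_in, n_hid):
            best = trial
    return best
```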


1996 ◽  
Vol 8 (2) ◽  
pp. 451-460 ◽  
Author(s):  
Georg Thimm ◽  
Perry Moerland ◽  
Emile Fiesler

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat-spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
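
In symbols, writing $\varphi_\gamma(x) = \varphi(\gamma x)$ for an activation with gain $\gamma$, the equivalence (as we read the theorem) is

$$\bigl(\gamma,\; \eta,\; w\bigr) \;\equiv\; \bigl(1,\; \gamma^{2}\eta,\; \gamma w\bigr),$$

i.e. a network with gain $\gamma$, learning rate $\eta$, and initial weights $w$ trains identically to one with unit gain, learning rate $\gamma^{2}\eta$, and initial weights $\gamma w$, which is why the gain can be eliminated as a parameter.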


1999 ◽  
Vol 09 (03) ◽  
pp. 251-256 ◽  
Author(s):  
L.C. PEDROZA ◽  
C.E. PEDREIRA

This paper proposes a new methodology for approximating functions by incorporating a priori information. The relationship between the proposed scheme and multilayer neural networks is explored theoretically and numerically. This approach is particularly interesting for the very relevant class of limited-spectrum functions. The number of free parameters is smaller compared to the Back-Propagation Algorithm, opening the way for better generalization results.
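
To make "limited spectrum" concrete (a standard illustration, not necessarily the paper's exact construction): a $2\pi$-periodic function whose spectrum is confined to $|k| \le K$ admits the finite representation

$$f(x) = \sum_{k=-K}^{K} c_k\, e^{\mathrm{i}kx},$$

so fixing this basis as a priori information leaves at most $2K+1$ free coefficients, typically far fewer than the weight count of a comparable back-propagation network.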


2012 ◽  
Vol 9 (4) ◽  
pp. 713-719
Author(s):  
Baghdad Science Journal

In this paper, we derive and prove stability bounds for the momentum coefficient µ and the learning rate η of the back-propagation updating rule in artificial neural networks. The theoretical upper bound on the learning rate η is derived, and its practical approximation is obtained.
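
For context, the classical condition of this kind for gradient descent with momentum on a quadratic loss with largest Hessian eigenvalue $\lambda_{\max}$ is

$$0 \le \mu < 1, \qquad 0 < \eta < \frac{2(1+\mu)}{\lambda_{\max}},$$

though the bounds derived in the paper may differ in their exact constants.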

