A Fast Incremental Learning Algorithm for Feed-forward Neural Networks Using Resilient Propagation

Author(s): Annie anak Joseph

2014 ◽ Vol 1030-1032 ◽ pp. 1627-1632
Author(s): Yun Jun Yu ◽ Sui Peng ◽ Zhi Chuan Wu ◽ Peng Liang He

Local minima cannot be avoided in the nonlinear optimization that underlies the learning of neural network parameters, and the larger the optimization space, the more pronounced the problem becomes. This paper proposes a new hybrid learning algorithm for three-layered feed-forward neural networks whose output layer uses a linear activation function. The algorithm combines a quasi-Newton method with adaptive decoupled step and momentum (QNADSM) with an iterative least-squares method that exploits the linear output layer. Simulations show that the hybrid algorithm is strongly self-adaptive, computationally cheap, and fast to converge, making it an effective and practical algorithm for engineering use.
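Below is a minimal sketch of such a hybrid scheme, assuming a three-layered network y = W2·tanh(W1·x + b1) + b2 with a linear output layer. The QNADSM update itself is not detailed in the abstract, so standard BFGS stands in for the quasi-Newton step on the hidden-layer weights, while the linear output layer is fitted exactly by least squares at each evaluation; all names, sizes, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hybrid training sketch: quasi-Newton (BFGS stand-in for QNADSM) over the
# hidden-layer weights, exact least squares for the linear output layer.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))       # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]    # toy target

n_in, n_hid = 2, 8

def hidden(theta, X):
    """Hidden-layer activations for flattened hidden parameters theta."""
    W1 = theta[: n_hid * n_in].reshape(n_hid, n_in)
    b1 = theta[n_hid * n_in :]
    return np.tanh(X @ W1.T + b1)

def loss(theta):
    """For fixed hidden weights, fit the linear output layer by least
    squares, then return the residual sum of squares."""
    H = hidden(theta, X)
    Ha = np.column_stack([H, np.ones(len(H))])    # bias column
    w2, *_ = np.linalg.lstsq(Ha, y, rcond=None)   # exact linear solve
    r = Ha @ w2 - y
    return 0.5 * r @ r

theta0 = rng.normal(scale=0.5, size=n_hid * n_in + n_hid)
res = minimize(loss, theta0, method="BFGS")       # quasi-Newton outer loop
print("final training loss:", res.fun)
```

Because the output layer is linear, solving it exactly inside the loss shrinks the nonlinear search space to the hidden-layer weights only, which is the structural idea the abstract describes.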


1991 ◽ Vol 02 (04) ◽ pp. 323-329
Author(s): C.J. Pérez Vicente ◽ J. Carrabina ◽ E. Valderrama

We introduce a learning algorithm for feed-forward neural networks whose synapses can take only a discrete set of values. Given the inherent limitations of such networks, the method performs quite efficiently, as we show through some simple experiments. Its main novelty with respect to other discrete learning techniques is a different strategy for searching the solution space. Generalization to any arbitrary distribution of discrete weights is straightforward.
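As an illustration of learning with discrete synapses, here is a minimal sketch assuming weights restricted to {-1, 0, +1}. The paper's particular search strategy is not reproduced; a simple greedy coordinate search over the allowed values stands in for it, and all sizes and data are toy assumptions.

```python
import numpy as np

# Discrete-weight training sketch: greedy coordinate search over the
# allowed synapse values, keeping any change that reduces the error count.

rng = np.random.default_rng(1)
X = rng.choice([-1, 1], size=(100, 10))
y = np.sign(X @ rng.choice([-1, 0, 1], size=10) + 0.1)  # separable toy target

ALLOWED = (-1, 0, 1)
w = rng.choice(ALLOWED, size=X.shape[1])

def errors(w):
    """Number of misclassified training patterns."""
    return np.sum(np.sign(X @ w + 0.1) != y)

for sweep in range(20):
    improved = False
    for i in range(len(w)):
        best_v, best_e = w[i], errors(w)
        for v in ALLOWED:               # try every discrete value for weight i
            w[i] = v
            e = errors(w)
            if e < best_e:
                best_v, best_e = v, e
                improved = True
        w[i] = best_v
    if not improved:                    # local optimum in this discrete space
        break

print("training errors:", errors(w))
```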


Author(s): Pilar Bachiller ◽ Julia González

Feed-forward neural networks have emerged as a good solution to many problems, such as classification, recognition, identification, and signal processing. However, the importance of selecting an adequate hidden structure for this neural model should not be underestimated. When the hidden structure of the network is too large and complex for the task being modeled, the network tends to memorize input-output pairs rather than learn the relationship between them. Such a network may train well but generalize poorly when inputs outside the training set are presented. In addition, training time increases significantly when the network is unnecessarily large and complex. Most proposed solutions to this problem train a larger-than-necessary network, prune unnecessary links and nodes, and retrain the reduced network. We propose a new method for optimizing the size of a feed-forward neural network using orthogonal transformations. This approach prunes unnecessary nodes during the training process itself, avoiding the retraining phase of the reduced network that most pruning techniques require.
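One common way to realize node pruning with orthogonal transformations is QR factorization with column pivoting of the hidden-layer activation matrix; whether this matches the authors' exact transformation is an assumption. The sketch below ranks hidden nodes by the magnitude of the corresponding diagonal entries of R and drops nearly linearly dependent ones, which could be done periodically during training.

```python
import numpy as np
from scipy.linalg import qr

# Pruning sketch: small diagonal entries of R in a pivoted QR of the
# activation matrix mark hidden units that are nearly linear combinations
# of the others and therefore redundant.

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (300, 3))
W1 = rng.normal(size=(10, 3))
b1 = rng.normal(size=10)
H = np.tanh(X @ W1.T + b1)                 # activations, one column per node
H[:, 7] = 0.5 * H[:, 2] + 0.5 * H[:, 4]    # inject a redundant node

_, R, piv = qr(H, mode="economic", pivoting=True)  # orthogonal transformation
scores = np.abs(np.diag(R))                # contribution of each pivoted node
keep = piv[scores > 1e-6 * scores[0]]      # drop nearly dependent columns
print("hidden nodes kept:", np.sort(keep))
```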


2004 ◽ Vol 4 (3) ◽ pp. 3653-3667
Author(s): D. J. Lary ◽ H. Y. Mussa

Abstract. In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFNs) is used. In the EKF approach, training the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performance provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describing the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and a family of correlation curves would normally be required. For example, the CH4-N2O correlation can be well described using a neural network trained with latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9997 between simulated and training values. The neural network Fortran code used is available for download.
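A minimal sketch of the EKF view of training follows, assuming the flattened weight vector is the state of a stationary process (identity state transition) and each training pair provides a scalar measurement. The network shape, noise covariances, and the numerical Jacobian are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

# EKF weight-training sketch: the weights are the state, the network output
# is the (non-linear) measurement, and each sample triggers one EKF update.

rng = np.random.default_rng(3)
n_in, n_hid = 2, 5
n_w = n_hid * n_in + n_hid + n_hid + 1   # W1, b1, w2, b2 flattened

def f(theta, x):
    """Scalar network output for input x and flattened weights theta."""
    W1 = theta[: n_hid * n_in].reshape(n_hid, n_in)
    b1 = theta[n_hid * n_in : n_hid * n_in + n_hid]
    w2 = theta[n_hid * n_in + n_hid : -1]
    b2 = theta[-1]
    return w2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(theta, x, eps=1e-6):
    """Numerical measurement Jacobian (analytic derivatives in practice)."""
    J = np.zeros(n_w)
    for i in range(n_w):
        d = np.zeros(n_w)
        d[i] = eps
        J[i] = (f(theta + d, x) - f(theta - d, x)) / (2 * eps)
    return J

theta = rng.normal(scale=0.3, size=n_w)   # state estimate (the weights)
P = np.eye(n_w) * 10.0                    # state covariance
Q, Rn = 1e-6 * np.eye(n_w), 1e-2          # process / measurement noise

for _ in range(2000):                     # one EKF update per sample
    x = rng.uniform(-1, 1, n_in)
    y = np.sin(x[0]) * x[1]               # toy target function
    H = jacobian(theta, x)                # linearized measurement
    P = P + Q                             # predict (stationary state)
    S = H @ P @ H + Rn                    # innovation variance
    K = P @ H / S                         # Kalman gain
    theta = theta + K * (y - f(theta, x)) # correct the weights
    P = P - np.outer(K, H @ P)

probe = np.array([0.3, -0.2])
print("residual on a probe point:", abs(f(theta, probe) - np.sin(0.3) * -0.2))
```

The memory caveat in the abstract corresponds to the covariance matrix P, which grows quadratically with the number of weights.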


Author(s): J. M. Westall ◽ M. S. Narasimha

Neural networks are now widely and successfully used in the recognition of handwritten numerals. Despite this wide use in recognition, they have not seen widespread use in segmentation. Segmentation can be extremely difficult in the presence of connected numerals, fragmented numerals, and background noise, and its failure is a principal cause of rejected and incorrectly read documents. Strategies leading to the successful application of neural technologies to segmentation are therefore likely to yield important performance benefits. In this paper we identify the problems that have impeded the use of neural networks in segmentation and describe an evolutionary approach to applying them there. Our approach, based on monotonic fuzzy-valued decision functions computed by feed-forward neural networks, has been successfully employed in a production system.
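A minimal sketch of how a feed-forward network's sigmoid output can serve as a fuzzy-valued decision function for candidate segmentation cuts follows; the features, network size, threshold, and random stand-in weights are illustrative assumptions, and the paper's monotonicity construction is not reproduced.

```python
import numpy as np

# Fuzzy segmentation-decision sketch: a small network scores each candidate
# cut with a membership value in [0, 1]; low-confidence fields are rejected.

rng = np.random.default_rng(4)
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)  # random stand-ins for
w2, b2 = rng.normal(size=6), 0.0                      # trained weights

def cut_confidence(features):
    """Fuzzy membership in [0, 1] for 'this cut separates two numerals'."""
    h = np.tanh(W1 @ features + b1)
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))       # sigmoid output

# Score several candidate cut positions and keep only confident ones;
# if no cut is confident, the field would be rejected for manual review.
candidates = [rng.uniform(-1, 1, 4) for _ in range(5)]
scores = [cut_confidence(c) for c in candidates]
accepted = [i for i, s in enumerate(scores) if s > 0.5]
print("accepted cuts:", accepted if accepted else "none -> reject field")
```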

