Optimizing and Learning Algorithm for Feed-forward Neural Networks

Author(s):  
Pilar Bachiller ◽  
Julia González

Feed-forward neural networks have emerged as a good solution for many problems, such as classification, recognition and identification, and signal processing. However, the importance of selecting an adequate hidden structure for this neural model should not be underestimated. When the hidden structure of the network is too large and complex for the model being developed, the network may tend to memorize input and output sets rather than learning the relationships between them. Such a network may train well but test poorly when presented with inputs outside the training set. In addition, training time increases significantly when the network is unnecessarily large and complex. Most proposed solutions to this problem consist of training a larger-than-necessary network, pruning unnecessary links and nodes, and then retraining the reduced network. We propose a new method to optimize the size of a feed-forward neural network using orthogonal transformations. This approach prunes unnecessary nodes during the training process, avoiding the retraining phase of the reduced network that most pruning techniques require.
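The abstract does not spell out which orthogonal transformation the method uses; purely as an illustration of the idea, the sketch below ranks hidden units with QR factorization with column pivoting applied to the hidden-layer activation matrix, a standard orthogonal-transformation criterion for spotting near-redundant units that could be pruned while training continues. The function name and the 1% threshold are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import qr

def rank_hidden_units(H):
    """Rank hidden units by how much independent information they carry.

    H is the (n_samples, n_hidden) matrix of hidden-layer activations
    over the training set. QR with column pivoting orders the columns
    (units) from most to least linearly independent; units with tiny
    diagonal entries in R are near-redundant pruning candidates.
    """
    _, R, piv = qr(H, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    scores = diag / diag[0]      # contribution relative to the leading unit
    return piv, scores

# Toy usage: keep only units contributing more than 1% of the leading one
rng = np.random.default_rng(0)
H = np.tanh(rng.normal(size=(200, 10)) @ rng.normal(size=(10, 8)))
piv, scores = rank_hidden_units(H)
print("hidden units to keep:", sorted(piv[scores > 1e-2]))
```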

Author(s):  
Tshilidzi Marwala

In this chapter, a classifier based on a missing-data estimation framework, using autoassociative multi-layer perceptron neural networks and genetic algorithms, is proposed. The proposed method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey and compared to conventional feed-forward neural networks. The proposed missing-data approach based on the autoassociative network model gives an accuracy of 92%, compared to the 84% obtained from the conventional feed-forward neural network models. The area under the receiver operating characteristic curve for the proposed autoassociative network model is 0.86, compared to 0.80 for the conventional feed-forward neural network model. The autoassociative network model proposed in this chapter therefore outperforms the conventional feed-forward neural network models and is an improved classifier. The reasons for this are: (1) the propagation of errors in the autoassociative network model is more distributed, whereas in a conventional feed-forward network it is more concentrated; and (2) there is no causality between the demographic properties and HIV, so the HIV status does not change the demographic properties and vice versa. Therefore, it is better to treat the problem as a missing data problem rather than a feed-forward problem.
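A minimal sketch of the classification step, assuming a trained autoassociative network is available as a callable `autoencoder` (a name introduced here for illustration): the HIV label is treated as a missing input and recovered by minimizing the reconstruction error, which is the objective the chapter's genetic algorithm optimizes. Because the label is binary, the genetic search collapses in this sketch to evaluating both candidate values.

```python
import numpy as np

def classify_by_missing_data(autoencoder, x, label_idx, candidates=(0.0, 1.0)):
    """Treat the class label as a missing input of an autoassociative net.

    autoencoder: trained autoassociative MLP, a callable R^n -> R^n.
    x: input vector whose entry at label_idx is unknown.
    Each candidate label is plugged in, and the one minimizing the
    reconstruction error ||x - autoencoder(x)||^2 is returned.
    """
    best_val, best_err = None, np.inf
    for v in candidates:
        trial = x.copy()
        trial[label_idx] = v
        err = np.sum((trial - autoencoder(trial)) ** 2)
        if err < best_err:
            best_val, best_err = v, err
    return best_val
```

For non-binary labels or several missing entries, the exhaustive loop would be replaced by a genetic algorithm searching over the missing values with the same reconstruction-error fitness.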


2014 ◽  
Vol 1030-1032 ◽  
pp. 1627-1632
Author(s):  
Yun Jun Yu ◽  
Sui Peng ◽  
Zhi Chuan Wu ◽  
Peng Liang He

The problem of local minima cannot be avoided in the nonlinear optimization performed when learning neural network parameters, and the larger the optimization space, the more pronounced the problem becomes. This paper proposes a new hybrid learning algorithm for three-layered feed-forward neural networks. The algorithm targets three-layered feed-forward networks whose output-layer function is linear, and combines a quasi-Newton method with adaptive decoupled step and momentum (QNADSM) for the hidden-layer weights with an iterative least-squares method for the output-layer weights. Simulations show that the hybrid algorithm has strong self-adaptive capability, low computational cost, and fast convergence, making it an effective and practical algorithm for engineering use.
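A sketch of the decomposition under these assumptions (the QNADSM recursion is not given in the abstract, so a plain momentum gradient step stands in for the quasi-Newton update): because the output layer is linear, its weights can be solved exactly by least squares at each epoch, leaving only the hidden weights to the iterative optimizer and shrinking the nonlinear search space.

```python
import numpy as np

def train_hybrid(X, Y, n_hidden, epochs=200, lr=0.05, momentum=0.9):
    """Hybrid training for a three-layer net with a linear output layer.

    Hidden weights W1: iterative update (momentum gradient step here,
    standing in for the paper's QNADSM quasi-Newton step).
    Output weights W2: exact least-squares solution each epoch,
    possible because the output-layer function is linear.
    """
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    V = np.zeros_like(W1)                              # momentum buffer
    for _ in range(epochs):
        H = np.tanh(X @ W1)                            # hidden activations
        W2, *_ = np.linalg.lstsq(H, Y, rcond=None)     # solve linear layer exactly
        E = H @ W2 - Y                                 # output error
        G = X.T @ ((E @ W2.T) * (1 - H**2))            # gradient w.r.t. W1
        V = momentum * V - lr * G / len(X)
        W1 += V
    return W1, W2
```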


2020 ◽  
Author(s):  
Muhammad Haseeb Arshad ◽  
M. A. Abido

This paper (M. H. Arshad and M. A. Abido, "An Overview of Sequential Learning Algorithms for Single Hidden Layer Networks: Current Issues & Future Trends") presents a brief survey of the sequential learning algorithms commonly used with single-hidden-layer feed-forward neural networks. It reviews the different kinds available in the literature to date, how they have developed over the years, and their relative performance. The most important considerations when designing such networks are complexity, computational efficiency, training time, and the ability to generalize to the problem under study, and the surveyed sequential learning algorithms are compared with respect to these merits.
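As one representative of the family such surveys cover, the sketch below implements an OS-ELM-style sequential learner: the hidden layer is fixed at random and only the output weights are updated, one sample at a time, by recursive least squares. It is illustrative and is not an algorithm taken from the paper.

```python
import numpy as np

class SequentialSLFN:
    """Single-hidden-layer net trained sequentially (OS-ELM style)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.beta = np.zeros((n_hidden, n_out))      # trainable output weights
        self.P = np.eye(n_hidden) * 1e3              # inverse correlation matrix

    def partial_fit(self, x, y):
        h = np.tanh(x @ self.W)                      # hidden activations
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)                      # recursive least-squares gain
        self.P -= np.outer(k, Ph)
        self.beta += np.outer(k, y - h @ self.beta)  # correct output weights

    def predict(self, x):
        return np.tanh(x @ self.W) @ self.beta
```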


Mathematics ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 262 ◽  
Author(s):  
Beong Yun

It is well known that feed-forward neural networks can approximate functions, given an appropriate activation function. In this paper, employing a new sigmoidal function with a parameter as the activation function, we consider a constructive feed-forward neural network approximation on a closed interval. The developed approximation takes the simple form of a superposition of the parametric sigmoidal function. It is shown that the proposed method is very effective for approximating discontinuous functions as well as continuous ones. On several examples, the practicality of the presented method is demonstrated by comparing its numerical results with those of an existing neural network approximation method. Furthermore, the efficiency of the method when extended to multivariate functions is also illustrated.
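A minimal sketch of a constructive superposition of this kind, with the logistic function standing in for the paper's parametric sigmoid (the sharpness parameter m and the node placement are assumptions made here): the step f(x_j) - f(x_{j-1}) between neighbouring nodes is carried by a single shifted sigmoid, so the network is written down explicitly, with no training, and jump discontinuities are handled naturally by sharp transitions.

```python
import numpy as np

def sigma(t, m):
    """Logistic stand-in for the parametric sigmoid; m sets the sharpness."""
    return 1.0 / (1.0 + np.exp(-m * t))

def sigmoidal_approx(f, a, b, n, m=50.0):
    """Approximate f on [a, b] by a superposition of n shifted sigmoids,
    one per subinterval, each carrying the local increment of f."""
    xs = np.linspace(a, b, n + 1)
    fs = f(xs)
    h = (b - a) / n
    def approx(x):
        x = np.asarray(x, dtype=float)
        y = np.full_like(x, fs[0])
        for j in range(1, n + 1):
            y = y + (fs[j] - fs[j - 1]) * sigma(x - xs[j] + 0.5 * h, m)
        return y
    return approx

# Usage: a function with a jump at x = 0
f = lambda x: np.sign(x) + 0.3 * x
g = sigmoidal_approx(f, -1.0, 1.0, 80)
print(g(np.array([-0.5, 0.5])))   # close to f = [-1.15, 1.15]
```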


1991 ◽  
Vol 02 (04) ◽  
pp. 323-329 ◽  
Author(s):  
C.J. Pérez Vicente ◽  
J. Carrabina ◽  
E. Valderrama

We introduce a learning algorithm for feed-forward neural networks whose synapses can take only a discrete set of values. Taking into account the inherent limitations of these networks, we find the performance of the method quite efficient, as we show through some simple results. The main novelty with respect to other discrete learning techniques is a different strategy in the search for solutions. Generalizations to any arbitrary distribution of discrete weights are straightforward.
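The search strategy itself is not described in the abstract; purely as an illustration of learning under a discrete weight constraint, this sketch runs a greedy coordinate search over the allowed levels for a single-layer classifier. The level set, sweep count, and error measure are all assumptions, not the authors' method.

```python
import numpy as np

def discrete_greedy_train(X, y, levels=(-1.0, 0.0, 1.0), sweeps=20, seed=0):
    """Greedy coordinate search over discrete synapse values.

    X: (n_samples, n_in) inputs; y: labels in {-1, +1}.
    Each sweep tries every allowed level for every weight and keeps
    the change whenever it lowers the misclassification count.
    """
    rng = np.random.default_rng(seed)
    w = rng.choice(levels, size=X.shape[1])

    def n_errors(w):
        return int(np.sum(np.sign(X @ w) != y))

    best = n_errors(w)
    for _ in range(sweeps):
        improved = False
        for i in range(len(w)):
            for v in levels:
                old, w[i] = w[i], v
                e = n_errors(w)
                if e < best:
                    best, improved = e, True
                else:
                    w[i] = old               # revert a non-improving change
        if not improved:
            break
    return w, best
```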


Author(s):  
Polad Geidarov

Introduction: Metric recognition methods make it possible to preliminarily and strictly determine the structure of a feed-forward neural network, namely, the number of neurons, layers, and connections, based on the initial parameters of the recognition problem. They also make it possible to analytically calculate the synapse weights of the network's neurons from metric expressions. The setup procedure for these networks is a sequential analytical calculation of each synapse weight in the weight table for neurons of the zero or first layer, which yields a working feed-forward neural network at the initial stage without the use of training algorithms. The network can then be trained by well-known learning algorithms, which generally speeds up the process of its creation and training. Purpose: To determine how much time the calculation of the weight values requires and, accordingly, how reasonable it is to precompute the weights of a feed-forward neural network. Results: An algorithm is proposed and implemented for the automated calculation of all values of the synapse weight tables for the zero and first layers, as applied to the task of recognizing black-and-white monochrome symbol images. The proposed algorithm is implemented in the Builder C++ software environment. The possibility of optimizing the weight calculation in order to accelerate the entire algorithm is considered, and the time spent calculating these weights is estimated for different configurations of neural networks based on metric recognition methods. Examples of creating and calculating synapse weight tables with the considered algorithm are given; they show that the analytical calculation of the weights takes just seconds or minutes, in no way comparable to the time necessary for training a neural network. Practical relevance: Analytical calculation of the weights can significantly accelerate the process of creating and training a feed-forward neural network. Based on the proposed algorithm, one can implement an algorithm for calculating three-dimensional weight tables for more complex images, either black-and-white or color grayscale ones.
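The metric expressions themselves are not reproduced in the abstract; the sketch below shows one concrete reading of the idea for the Euclidean metric. Comparing an input's distances to two class templates is a test that is linear in the input image, so each zero-layer neuron's synapse weights and bias follow from the templates analytically, with no training pass. The pairwise-voting classifier built on top is likewise an illustrative assumption.

```python
import numpy as np

def metric_weights(templates):
    """Analytically derive synapse weights from class templates.

    templates: (n_classes, n_pixels) reference images, one per class.
    For the Euclidean metric, 'x is closer to template i than to j'
    is the linear test w.x + b > 0 with
        w = 2 (t_i - t_j),   b = ||t_j||^2 - ||t_i||^2,
    so every weight is computed directly, without training.
    """
    neurons = {}
    for i in range(len(templates)):
        for j in range(i + 1, len(templates)):
            w = 2.0 * (templates[i] - templates[j])
            b = templates[j] @ templates[j] - templates[i] @ templates[i]
            neurons[(i, j)] = (w, b)
    return neurons

def classify(x, n_classes, neurons):
    votes = np.zeros(n_classes)
    for (i, j), (w, b) in neurons.items():
        votes[i if w @ x + b > 0 else j] += 1   # pairwise distance comparison
    return int(np.argmax(votes))
```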


1997 ◽  
Vol 08 (03) ◽  
pp. 263-277
Author(s):  
Jim Torresen

One of the issues in backpropagation training of feed-forward neural networks is the effect of the weight update frequency. This aspect influences the efficiency of parallel implementations of the training algorithm in which the training vectors are distributed among processors. In this paper, the convergence of two applications for various weight update intervals is reported. Further, several models are proposed for describing convergence and learning-rate behavior across a set of weight update intervals. The results show that updating the weights after each training vector requires about ten times fewer training iterations than updating the weights only once for the whole training set.
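A sketch of the parameter under study, on a toy linear unit rather than the paper's benchmark applications: the same delta-rule gradient is accumulated over `update_interval` training vectors before being applied, so `update_interval=1` gives per-pattern updating and `update_interval=len(X)` gives one update per epoch, the two extremes compared in the results.

```python
import numpy as np

def train_epoch(X, Y, w, lr=0.1, update_interval=1):
    """One epoch of delta-rule training on a linear unit, applying the
    accumulated weight update every `update_interval` training vectors."""
    grad, seen = np.zeros_like(w), 0
    for x, y in zip(X, Y):
        grad += (x @ w - y) * x          # gradient of the squared error
        seen += 1
        if seen == update_interval:
            w = w - lr * grad / seen     # apply and reset the update
            grad, seen = np.zeros_like(w), 0
    if seen:                             # flush a final partial batch
        w = w - lr * grad / seen
    return w
```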

