An Overview of Sequential Learning Algorithms for Single Hidden Layer Networks: Current Issues & Future Trends

Author(s):  
Muhammad Haseeb Arshad ◽  
M. A. Abido

This paper serves as an overview of sequential learning algorithms for single hidden layer neural networks. Cite as: M. H. Arshad, M. A. Abido. An Overview of Sequential Learning Algorithms for Single Hidden Layer Networks: Current Issues & Future Trends (2020). Abstract: This paper presents a brief survey of the sequential learning algorithms commonly used with single hidden layer feed-forward neural networks. It reviews the variants available in the literature to date, how they have evolved over the years, and their relative performance. The most important considerations when designing such networks are their complexity, computational efficiency, training time, and ability to generalize to the problem under study. The surveyed sequential learning algorithms are compared with respect to these criteria.
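
One widely cited algorithm in this family is the online sequential extreme learning machine (OS-ELM) of Liang et al. (2006), in which the hidden layer of a single hidden layer feed-forward network is fixed at random and only the output weights are updated recursively as data arrive. The NumPy sketch below illustrates that recursive least-squares update; it is a minimal illustration of the general technique, not code from the paper under review, and the class name and method names are invented here for exposition.

import numpy as np

class OSELM:
    """Illustrative online sequential extreme learning machine
    (OS-ELM) for a single hidden layer feed-forward network."""

    def __init__(self, n_inputs, n_hidden, seed=None):
        rng = np.random.default_rng(seed)
        # Hidden-layer parameters are drawn at random and never trained.
        self.W = rng.normal(size=(n_inputs, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.P = None      # inverse covariance of hidden activations
        self.beta = None   # output weights: the only learned parameters

    def _hidden(self, X):
        # Hidden-layer activations for a batch X (n_samples x n_inputs).
        return np.tanh(X @ self.W + self.b)

    def init_fit(self, X0, Y0):
        # Initialization phase: needs at least n_hidden samples so that
        # H.T @ H is invertible.
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H)
        self.beta = self.P @ H.T @ Y0

    def partial_fit(self, Xk, Yk):
        # Sequential phase: recursive least-squares update per chunk,
        # so past data never has to be stored or revisited.
        H = self._hidden(Xk)
        K = np.linalg.inv(np.eye(len(Xk)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (Yk - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

Training starts with init_fit on an initial batch, after which partial_fit absorbs each new chunk in constant memory; that one-pass property is exactly what makes sequential algorithms attractive when data arrive as a stream.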

Author(s):  
Pilar Bachiller ◽  
Julia González

Feed-forward neural networks have emerged as a good solution for many problems, such as classification, recognition and identification, and signal processing. However, the importance of selecting an adequate hidden structure for this neural model should not be underestimated. When the hidden structure of the network is too large and complex for the model being developed, the network may tend to memorize input and output sets rather than learn the relationships between them. Such a network may train well but test poorly when inputs outside the training set are presented. In addition, training time increases significantly when the network is unnecessarily large and complex. Most proposed solutions to this problem consist of training a larger-than-necessary network, pruning unnecessary links and nodes, and retraining the reduced network. We propose a new method to optimize the size of a feed-forward neural network using orthogonal transformations. This approach prunes unnecessary nodes during the training process, avoiding the retraining phase of the reduced network that most pruning techniques require.
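
The abstract does not spell out which orthogonal transformation is used, but a standard way to detect redundant hidden nodes is a pivoted QR factorization of the hidden-activation matrix: nodes whose activations are nearly linear combinations of other nodes' activations contribute little and can be dropped. The sketch below illustrates that idea under this assumption; the function name and the tol threshold are invented for illustration, and this is not the authors' published method.

import numpy as np
from scipy.linalg import qr

def prune_hidden_nodes(H, tol=1e-3):
    """Rank hidden nodes via a pivoted QR factorization of the
    hidden-activation matrix H (n_samples x n_hidden, with
    n_samples >= n_hidden) and return the sorted indices of the
    nodes worth keeping."""
    # Column pivoting orders the nodes by decreasing contribution,
    # so the diagonal of R is non-increasing in magnitude.
    Q, R, pivots = qr(H, mode='economic', pivoting=True)
    # A small |R[i, i]| means node pivots[i] is (nearly) a linear
    # combination of the higher-ranked nodes, i.e. redundant.
    diag = np.abs(np.diag(R))
    keep = pivots[diag > tol * diag[0]]
    return np.sort(keep)

Called periodically during training on the current batch's hidden activations, a routine like this lets the hidden weight matrix be shrunk in place (keeping only the returned columns), so training simply continues on the smaller network instead of restarting from scratch.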

