A New Learning Algorithm for Single Hidden Layer Feedforward Neural Networks

2011 ◽  
Vol 28 (6) ◽  
pp. 26-33 ◽  
Author(s):  
Virendra P. Vishwakarma ◽  
M. N. Gupta

2002 ◽  
Vol 12 (01) ◽  
pp. 45-67 ◽  
Author(s):  
M. R. MEYBODI ◽  
H. BEIGY

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which involves three parameters: the learning rate (η), the momentum factor (α), and the steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. In this paper, we present several classes of learning-automata-based solutions to the problem of adapting the BP algorithm's parameters. By interconnecting learning automata with the feedforward neural network, we use a learning automata scheme to adjust the parameters η, α, and λ based on observation of the random response of the neural network. One important aspect of the proposed schemes is their ability to escape from local minima with high probability during the training period. The feasibility of the proposed methods is shown through simulations on several problems.
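As a concrete illustration of this parameter-adaptation idea, the sketch below couples a linear reward-inaction (L_R-I) learning automaton to a one-hidden-layer network trained by BP with momentum, letting the automaton pick the learning rate η from a small candidate set according to whether the error decreased. The candidate rates, reward rule, network size, and XOR task are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch, not the paper's exact scheme: a linear reward-inaction (L_R-I)
# learning automaton chooses the BP learning rate eta from a small candidate set
# while a one-hidden-layer network is trained with momentum on XOR. The candidate
# rates, reward rule, and network size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
W1 = rng.normal(0, 1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 1, (n_hid, n_out)); b2 = np.zeros(n_out)
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)   # momentum terms
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

etas = np.array([0.05, 0.1, 0.5, 1.0])   # automaton actions: candidate learning rates
p = np.full(len(etas), 1.0 / len(etas))  # action probabilities
a, alpha, lam = 0.1, 0.9, 1.0            # L_R-I step size, momentum factor, steepness

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-lam * z))

prev_err = np.inf
for epoch in range(2000):
    k = rng.choice(len(etas), p=p)       # the automaton selects an action (an eta)
    eta = etas[k]

    # forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    err = np.mean((Y - T) ** 2)

    # backward pass: standard BP gradients for the steepened logistic activation
    dY = (Y - T) * lam * Y * (1 - Y)
    dH = (dY @ W2.T) * lam * H * (1 - H)
    gW2, gb2 = H.T @ dY, dY.sum(axis=0)
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)

    # gradient descent with momentum, using the automaton-chosen learning rate
    vW2 = alpha * vW2 - eta * gW2; vb2 = alpha * vb2 - eta * gb2
    vW1 = alpha * vW1 - eta * gW1; vb1 = alpha * vb1 - eta * gb1
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1

    # environment response: reward the chosen action if the error decreased
    if err < prev_err:
        p = (1 - a) * p                  # L_R-I: shrink all probabilities ...
        p[k] += a                        # ... and move the freed mass to action k
    prev_err = err                       # a penalty leaves p unchanged (reward-inaction)

print("final MSE:", prev_err, "eta probabilities:", np.round(p, 3))
```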


2016 ◽  
Vol 28 (S1) ◽  
pp. 719-726 ◽  
Author(s):  
Dong-Mei Pu ◽  
Da-Qi Gao ◽  
Tong Ruan ◽  
Yu-Bo Yuan

2012 ◽  
Vol 241-244 ◽  
pp. 1762-1767 ◽  
Author(s):  
Ya Juan Tian ◽  
Hua Xian Pan ◽  
Xuan Chao Liu ◽  
Guo Jian Cheng

To overcome the problems of low training speed and difficult parameter selection in the traditional support vector machine (SVM), a method based on the extreme learning machine (ELM) for lithofacies recognition is presented in this paper. ELM is a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). It not only simplifies the parameter selection process but also improves the training speed of network learning. By determining the optimal parameters, a lithofacies classification model is established, and the classification results of ELM are compared with those of the traditional SVM. The experimental results show that ELM, with fewer neurons, achieves classification accuracy similar to SVM, and its parameters are easier to select, which significantly reduces the training time. The feasibility of ELM for lithofacies recognition and the validity of the algorithm are verified.
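The core ELM step described in the abstract, random input weights and biases followed by an analytical solve for the output weights, can be sketched as follows. The synthetic three-class data, hidden-layer size, and sigmoid activation are placeholders for the paper's lithofacies features and tuned settings.

```python
# Hedged sketch of the basic ELM step: random input weights and biases, then an
# analytical (pseudo-inverse) solve for the output weights. The synthetic three-class
# data, hidden-layer size, and sigmoid activation are stand-ins for the paper's
# lithofacies features and tuned settings.
import numpy as np

rng = np.random.default_rng(0)

# synthetic "lithofacies" data: 300 samples, 5 log-derived features, 3 classes
n_classes = 3
means = rng.normal(0, 3, (n_classes, 5))
y = rng.integers(0, n_classes, 300)
X = means[y] + rng.normal(0, 1, (300, 5))
T = np.eye(n_classes)[y]                     # one-hot target matrix

L = 30                                       # number of hidden neurons (illustrative)
W = rng.uniform(-1, 1, (X.shape[1], L))      # random input weights (never trained)
b = rng.uniform(-1, 1, L)                    # random hidden biases

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer output matrix H

beta = np.linalg.pinv(hidden(X)) @ T         # ELM step: beta = pinv(H) @ T
pred = hidden(X) @ beta
print("training accuracy:", np.mean(pred.argmax(axis=1) == y))
```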


2008 ◽  
Vol 18 (05) ◽  
pp. 433-441 ◽  
Author(s):  
HIEU TRUNG HUYNH ◽  
YONGGWAN WON ◽  
JUNG-JA KIM

Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms due to the analytical determination of the output weights with a random choice of input weights and hidden layer biases. However, this algorithm often requires a large number of hidden units and thus responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden layer biases. However, this algorithm requires considerable time to search for optimal parameters through iterative processes and is not suitable for data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of hidden units are determined based on a fast regularized least-squares scheme. Experimental results for many real applications with both small and large numbers of input features show that our proposed approach can achieve good generalization performance with much more compact networks and extremely high speed for both learning and testing.
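The abstract applies a regularized least-squares scheme to choosing the input weights and hidden biases; the exact procedure is not given here, so the sketch below only illustrates the generic Tikhonov-regularized least-squares solve that underlies such schemes, shown for the output weights of a small ELM-style network. The data, hidden-layer size, and regularization constant are illustrative assumptions.

```python
# Hedged sketch: the generic Tikhonov-regularized least-squares solve that underlies
# regularized ELM-style training, shown here for the output weights of a small network.
# Note: the abstract uses a regularized least-squares scheme to pick the *input* weights
# and hidden biases; that exact procedure is not reproduced here. Data, hidden-layer
# size, and the regularization constant are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# toy regression data
X = rng.uniform(-1, 1, (200, 3))
t = np.sin(X.sum(axis=1, keepdims=True)) + 0.05 * rng.normal(size=(200, 1))

L, reg = 15, 1e-3                            # compact hidden layer, ridge parameter
W = rng.uniform(-1, 1, (3, L))               # random input weights
b = rng.uniform(-1, 1, L)                    # random hidden biases
H = np.tanh(X @ W + b)                       # hidden-layer output matrix

# regularized least squares: beta = (H^T H + reg * I)^(-1) H^T t
beta = np.linalg.solve(H.T @ H + reg * np.eye(L), H.T @ t)

print("training MSE:", np.mean((H @ beta - t) ** 2))
```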


1998 ◽  
Vol 10 (4) ◽  
pp. 1007-1030 ◽  
Author(s):  
J. Manuel Torres Moreno ◽  
Mirta B. Gordon

This article presents a new incremental learning algorithm for classification tasks, called NetLines, which is well adapted to both binary and real-valued input patterns. It generates small, compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared to the best published results on well-known classification benchmarks. Early stopping is shown to decrease overfitting, without improving the generalization performance.
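The sketch below is a toy constructive loop in the spirit of incremental algorithms such as NetLines, not the NetLines learning rule itself: binary (sign) hidden units are added one at a time, each trained with a simple pocket perceptron on the patterns the current network still misclassifies, and a sign output unit is retrained on the growing hidden representation. The data set, unit limit, and epoch counts are illustrative assumptions.

```python
# Hedged sketch: a toy constructive loop in the spirit of incremental algorithms such as
# NetLines, not the NetLines rule itself. Binary (sign) hidden units are added one at a
# time, each trained with a simple pocket perceptron on the patterns the current network
# still misclassifies; a sign output unit is then retrained on the hidden representation.
# The data set, unit limit, and epoch counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pocket_perceptron(X, y, epochs=200):
    """Perceptron with a 'pocket': keep the weights that made the fewest errors."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # absorb the bias term
    w = rng.normal(size=Xb.shape[1])
    best_w, best_err = w.copy(), np.inf
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if np.sign(Xb[i] @ w) != y[i]:
                w += y[i] * Xb[i]                      # classic perceptron update
        err = np.sum(np.sign(Xb @ w) != y)
        if err < best_err:
            best_w, best_err = w.copy(), err
    return best_w

def unit_out(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# toy two-class problem that is not linearly separable (XOR-like quadrants)
X = rng.normal(0.0, 1.0, (200, 2))
y = np.sign(X[:, 0] * X[:, 1])                         # targets in {-1, +1}

hidden_ws = [pocket_perceptron(X, y)]                  # first hidden unit: whole data set
for _ in range(5):                                     # grow at most a few more units
    H = np.column_stack([unit_out(X, w) for w in hidden_ws])
    w_out = pocket_perceptron(H, y)                    # retrain the binary output unit
    wrong = unit_out(H, w_out) != y
    print(f"{len(hidden_ws)} hidden unit(s): training error {wrong.mean():.3f}")
    if not wrong.any():
        break
    # the next hidden unit focuses on the patterns the current network still gets wrong
    hidden_ws.append(pocket_perceptron(X[wrong], y[wrong]))
```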

