Efficient Adaptive Learning for Classification Tasks with Binary Units

1998 · Vol 10 (4) · pp. 1007-1030 · Author(s): J. Manuel Torres Moreno, Mirta B. Gordon

This article presents a new incremental learning algorithm for classification tasks, called NetLines, which is well adapted to both binary and real-valued input patterns. It generates small, compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared to the best published results on well-known classification benchmarks. Early stopping is shown to decrease overfitting without improving generalization performance.
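As a rough illustration of the constructive idea described above, the sketch below grows a hidden layer of binary (perceptron) units one at a time and then fits a binary output unit on the resulting internal representation. It is a simplification under assumed conventions (±1 labels, a plain perceptron rule, a hypothetical target rule for new units), not the NetLines algorithm itself.

```python
import numpy as np

def perceptron(X, y, epochs=200):
    """Train a single binary (+/-1) threshold unit with the perceptron rule."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified or on the boundary
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            break
    return w

def unit(X, w):
    """Binary (+/-1) output of a threshold unit for every pattern in X."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xb @ w >= 0, 1.0, -1.0)

def grow_network(X, y, max_hidden=10):
    """Add binary hidden units one at a time until a binary output unit
    separates the training set (or max_hidden is reached)."""
    hidden, targets = [], y.copy()
    for _ in range(max_hidden):
        hidden.append(perceptron(X, targets))               # new hidden unit
        H = np.column_stack([unit(X, w) for w in hidden])   # internal representation
        w_out = perceptron(H, y)                             # binary output unit
        out = unit(H, w_out)
        if np.array_equal(out, y):
            break
        # Hypothetical target rule (not the paper's prescription): the next
        # unit concentrates on the patterns the current network still gets wrong.
        targets = np.where(out == y, y, -y)
    return hidden, w_out
```

For problems with more than two classes, the same binary machinery can be wrapped in a one-against-rest scheme, which is one common way to realize the multi-class construction the abstract mentions; the inputs are assumed to be a NumPy array X and a ±1 label vector y.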

2016 · Vol 28 (S1) · pp. 719-726 · Author(s): Dong-Mei Pu, Da-Qi Gao, Tong Ruan, Yu-Bo Yuan

2008 · Vol 18 (05) · pp. 433-441 · Author(s): Hieu Trung Huynh, Yonggwan Won, Jung-Ja Kim

Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically once the input weights and hidden-layer biases have been chosen at random. However, this algorithm often requires a large number of hidden units and therefore responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden-layer biases. However, E-ELM requires considerable time to search for optimal parameters through its iterative process and is not well suited to data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of the hidden units are determined by a fast regularized least-squares scheme. Experimental results on many real applications with both small and large numbers of input features show that the proposed approach achieves good generalization performance with much more compact networks and extremely high speed for both learning and testing.
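A minimal sketch of the baseline ELM step described above, assuming a sigmoid hidden layer, with the output weights obtained by a regularized (ridge) least-squares solve for numerical stability. The function names and the parameters n_hidden, reg, and seed are illustrative; this is not the paper's proposed scheme for determining the input weights.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, reg=1e-3, seed=0):
    """Baseline ELM: random input weights and biases, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never updated)
    b = rng.standard_normal(n_hidden)                  # random hidden-layer biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # sigmoid hidden-layer activations
    # Output weights: beta = (H^T H + reg*I)^(-1) H^T y  (ridge-regularized least squares)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the only fitted quantity is the linear output map beta, training cost is dominated by one matrix factorization, which is what makes the approach so much faster than gradient-based training of all weights.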


2012 · Vol 3 (3) · pp. 179-188 · Author(s): Sevil Ahmed, Nikola Shakev, Andon Topalov, Kostadin Shiev, Okyay Kaynak
