A taxonomy and comparison of data smoothing and function approximation methods

1981 ◽ Vol 14 (7) ◽ pp. 500 ◽ Author(s): Lebert R. Alley, James L. Smith
2006 ◽ Vol 16 (04) ◽ pp. 283-293 ◽ Author(s): Pei-Yi Hao, Jung-Hsien Chiang

This paper presents pruning and model-selection algorithms for support vector learning in sample classification and function regression. When constructing an RBF network by support vector learning, we occasionally obtain redundant support vectors that do not significantly affect the final classification or function approximation results. The pruning algorithms are based primarily on a sensitivity measure and a penalty term. The kernel function parameters and the position of each support vector are updated so as to minimize the increase in error, which makes the structure of the SVM network more flexible. We illustrate this approach on synthetic data and a face detection problem to demonstrate the effectiveness of the pruning.
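As a rough illustration of sensitivity-based pruning, the sketch below trains an RBF-kernel SVM with scikit-learn and discards support vectors whose individual kernel terms contribute least to the decision function. The sensitivity proxy, the threshold value, and all variable names are assumptions made for illustration; they are not the authors' exact measure or penalty term.

```python
# Minimal sketch: prune support vectors by a simple sensitivity proxy,
# assuming an RBF-kernel SVM trained with scikit-learn.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=0)
gamma = 1.0
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

sv = svm.support_vectors_          # support vectors
alpha = svm.dual_coef_.ravel()     # signed dual coefficients y_i * alpha_i
b = svm.intercept_[0]

def rbf(A, B, gamma):
    # Pairwise RBF kernel between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

K = rbf(X, sv, gamma)              # kernel between training data and SVs
f_full = K @ alpha + b             # decision values with all SVs

# Sensitivity of each SV: mean absolute contribution of its term to the
# decision function (an illustrative stand-in for the paper's measure).
sensitivity = np.mean(np.abs(K * alpha[None, :]), axis=0)

keep = sensitivity > 0.05          # threshold chosen arbitrarily here
f_pruned = K[:, keep] @ alpha[keep] + b

agreement = np.mean(np.sign(f_pruned) == np.sign(f_full))
print(f"kept {keep.sum()}/{len(alpha)} support vectors, "
      f"decision agreement {agreement:.3f}")
```

Support vectors whose removal barely changes the decision function are the "redundant" ones the abstract refers to; the paper additionally re-tunes the kernel parameters and SV positions after pruning, which this sketch does not attempt.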


1995 ◽ Vol 7 (2) ◽ pp. 338-348 ◽ Author(s): G. Deco, D. Obradovic

This paper presents a new learning paradigm that consists of Hebbian and anti-Hebbian learning. A layer of radial basis functions is adapted in an unsupervised fashion by minimizing a two-element cost function. The first element maximizes the output of each Gaussian neuron and can be seen as an implementation of the traditional Hebbian learning law. The second element reinforces competitive learning by penalizing the correlation between the nodes; it therefore has an "anti-Hebbian" effect that is learned by the Gaussian neurons without implementing lateral inhibition synapses. Consequently, decorrelated Hebbian learning (DHL) performs clustering in the input space while avoiding the "nonbiological" winner-take-all rule. In addition to the standard clustering problem, this paper also presents an application of DHL to function approximation. A scaled piecewise-linear approximation of a function is obtained in a supervised fashion within the local regions of its domain determined by DHL. For comparison, a standard single-hidden-layer Gaussian network is optimized with initial centers corresponding to the DHL solution. The efficiency of the algorithm is demonstrated on the chaotic Mackey-Glass time series.
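The sketch below illustrates a decorrelated-Hebbian-style update for Gaussian centers under assumed forms of the two cost terms: a reward for each unit's activation (Hebbian) and a penalty on pairwise correlation between units (anti-Hebbian). The exact cost function, widths, learning rates, and data in the paper may differ; this is a toy one-dimensional example only.

```python
# Minimal sketch of a two-term (Hebbian + anti-Hebbian) cost minimized by
# gradient descent on the centres of a layer of Gaussian units.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))        # 1-D inputs to be clustered
centres = rng.uniform(-1, 1, size=(5, 1))    # 5 Gaussian units
sigma, lam, lr = 0.3, 0.5, 0.05              # width, decorrelation weight, step

def activations(X, c, sigma):
    # phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))
    d2 = np.sum((X[:, None, :] - c[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

for _ in range(200):
    phi = activations(X, centres, sigma)              # shape (N, J)
    # dE/dphi_j: -1 from the Hebbian term (maximize each unit's output),
    # +lam * (sum of the other units' activations) from the decorrelation term.
    other = phi.sum(axis=1, keepdims=True) - phi
    dE_dphi = -1.0 + lam * other
    # Chain rule through the Gaussian: dphi_ij/dc_j = phi_ij * (x_i - c_j) / sigma^2
    diff = X[:, None, :] - centres[None, :, :]        # shape (N, J, 1)
    grad = np.einsum("nj,njd->jd", dE_dphi * phi, diff) / (sigma ** 2 * len(X))
    centres -= lr * grad

print("adapted centres:", np.round(centres.ravel(), 3))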

