Exploiting the functional training approach in Radial Basis Function networks

Author(s):  
Cristiano L. Cabrita ◽  
Antonio E. Ruano ◽
Pedro M. Ferreira
2005 ◽  
Vol 293-294 ◽  
pp. 135-142


Author(s):
Graeme Manson ◽  
Gareth Pierce ◽  
Keith Worden ◽  
Daley Chetwynd

This paper considers the performance of radial basis function neural networks for data classification, illustrated using a simple two-class problem. Two techniques for reducing the rate of misclassifications, via the introduction of an “unable to classify” label, are presented. The first imposes a threshold value on the classifier outputs, whilst the second replaces the crisp network weights with interval ranges. Two network training techniques are investigated, and it is found that, although thresholding and uncertain weights give similar results, the level of variability of network performance depends upon the training approach.
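
The abstract gives no code, but the thresholding idea is easy to sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a Gaussian RBF network whose outputs are per-class scores and rejects a sample as "unable to classify" when no score clears a chosen threshold. All function names and the threshold value are hypothetical.

```python
import numpy as np

def rbf_scores(X, centres, sigma, W):
    """Per-class scores of a Gaussian RBF network.

    X: (n, d) inputs; centres: (m, d) basis centres;
    sigma: shared kernel width; W: (m, c) output weights.
    """
    # Squared distance from every input to every centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    H = np.exp(-d2 / (2.0 * sigma**2))  # (n, m) hidden activations
    return H @ W                        # (n, c) class scores

def classify_with_rejection(scores, threshold=0.5):
    """Winning class index per row, or -1 ("unable to classify")
    when no output reaches the threshold."""
    winners = scores.argmax(axis=1)
    confident = scores.max(axis=1) >= threshold
    return np.where(confident, winners, -1)
```

Raising the threshold trades misclassifications for more rejections, which is precisely the trade-off the paper studies.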


Universal Approximation Using Radial-Basis-Function Networks

1991 ◽
Vol 3 (2) ◽
pp. 246-257 ◽
Author(s):
J. Park ◽
I. W. Sandberg

There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
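
As an informal illustration of this result (the paper itself is purely theoretical), the sketch below fits a one-hidden-layer Gaussian RBF network, with a single smoothing factor shared by every kernel node, to a smooth 1-D target by least squares. The target function, centres, and width are arbitrary choices made for the example.

```python
import numpy as np

# Smooth 1-D target to approximate, sampled on a grid.
f = lambda x: np.sin(3 * x) + 0.5 * x
x = np.linspace(-2.0, 2.0, 200)

# One hidden layer of Gaussian kernels, all sharing the same
# smoothing factor sigma (the class covered by the theorem).
centres = np.linspace(-2.0, 2.0, 25)
sigma = 0.3
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * sigma**2))

# Linear output weights fitted by least squares.
w, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

print("max abs error:", np.abs(Phi @ w - f(x)).max())
```

Adding more centres drives the error toward zero, consistent with the density claim proved in the paper.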

