APPLICATION OF GENERALIZED RADIAL BASIS FUNCTION NETWORKS TO RECOGNITION OF RADAR TARGETS

Author(s): De-Shuang Huang

This paper extends radial basis function networks (RBFN) with Gaussian kernel functions to generalized radial basis function networks (GRBFN) with Parzen window functions, and discusses applying the GRBFNs to the recognition of radar targets. The equivalence between RBFN classifiers (RBFNC) with outer-supervised signals of 0 or 1 and the Parzen-window estimate of the probability density is proved. It is pointed out that the input-output (I/O) functions of the hidden units in the RBFNC can be extended to general Parzen window functions (also called potential functions). We present the use of a recursive least squares-backpropagation (RLS-BP) learning algorithm to train the GRBFNCs to classify five types of radar targets by means of their one-dimensional cross profiles. The concepts of recognition rate and confidence for testing the classification performance of the GRBFNCs are introduced. Six generalized kernel functions, namely Gaussian, double-exponential, triangle, hyperbolic, sinc, and Cauchy, are used as the hidden-unit I/O functions of the RBFNCs, and the classification performance of the corresponding GRBFNCs on one-dimensional cross profiles of radar targets is discussed.
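As a concrete illustration of the idea, the sketch below implements a single forward pass of a GRBFN classifier whose hidden units use one of six Parzen window (potential) functions. The exact kernel formulas, the normalization of distances by a shared smoothing factor, and the names `kernel` and `grbfn_forward` are assumptions made for this example; they are not the paper's definitions or its RLS-BP training procedure.

```python
# Illustrative sketch only: plausible forms of the six Parzen window functions and
# a GRBFN forward pass, not the paper's exact formulas.
import numpy as np

def kernel(r, kind="gaussian"):
    """Parzen-window (potential) functions of the normalized distance r >= 0."""
    if kind == "gaussian":
        return np.exp(-r**2)
    if kind == "double_exponential":          # Laplacian-type window
        return np.exp(-np.abs(r))
    if kind == "triangle":
        return np.maximum(1.0 - np.abs(r), 0.0)
    if kind == "hyperbolic":                  # assumed form: 1 / cosh(r)
        return 1.0 / np.cosh(r)
    if kind == "sinc":
        return np.sinc(r / np.pi)             # np.sinc(x) = sin(pi*x)/(pi*x), so this gives sin(r)/r
    if kind == "cauchy":
        return 1.0 / (1.0 + r**2)
    raise ValueError(f"unknown kernel: {kind}")

def grbfn_forward(x, centers, sigma, W, kind="gaussian"):
    """One forward pass of a GRBFN classifier.

    x       : (d,)   input vector (e.g. a one-dimensional cross profile)
    centers : (h, d) hidden-unit centers
    sigma   : shared smoothing factor
    W       : (c, h) output weights, one row per target class
    """
    r = np.linalg.norm(x - centers, axis=1) / sigma   # normalized distances to the centers
    phi = kernel(r, kind)                             # hidden-layer outputs
    return W @ phi                                    # class scores
```

With outer-supervised signals of 0 or 1, the class scores produced by such a network play the role of the Parzen-window density estimates whose equivalence the paper establishes; the predicted target class is simply the index of the largest score.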

2020, Vol. 1471, pp. 012043
Author(s): Yessi Jusman, Zul Indra, Roni Salambue, Siti Nurul Aqmariah Mohd Kanafiah, Muhammad Ahdan Fawwaz Nurkholid

1991, Vol. 3 (2), pp. 246-257
Author(s): J. Park, I. W. Sandberg

There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
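A minimal numerical sketch of this setting follows: a Gaussian RBF network whose kernel nodes all share the same smoothing factor is fitted by linear least squares to an arbitrary continuous target function. The grid of centers, the value sigma = 0.5, and the helper names fit_rbf and predict_rbf are illustrative assumptions; the example demonstrates the approximation capability numerically rather than reproducing the paper's proof.

```python
# A minimal numerical illustration (not the paper's construction): a one-hidden-layer
# Gaussian RBF network with a single shared smoothing factor, fitted by least squares.
import numpy as np

def fit_rbf(x_train, y_train, centers, sigma):
    """Fit the output weights of a Gaussian RBF network with shared smoothing factor."""
    Phi = np.exp(-((x_train[:, None] - centers[None, :]) / sigma) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
    return w

def predict_rbf(x, centers, sigma, w):
    """Evaluate the fitted RBF network at the points x."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) / sigma) ** 2)
    return Phi @ w

if __name__ == "__main__":
    x = np.linspace(-3, 3, 200)
    target = np.sin(2 * x) + 0.3 * x**2           # an arbitrary continuous target
    centers = np.linspace(-3, 3, 25)              # kernel nodes on a uniform grid
    w = fit_rbf(x, target, centers, sigma=0.5)    # same smoothing factor in every node
    err = np.max(np.abs(predict_rbf(x, centers, sigma=0.5, w=w) - target))
    print(f"max absolute error: {err:.4f}")
```

Increasing the number of kernel nodes (and choosing the shared smoothing factor accordingly) drives the approximation error down, which is the practical content of the universal approximation result.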

