GENETIC OPTIMIZATION OF RADIAL BASIS PROBABILISTIC NEURAL NETWORKS

Author(s):
WEN-BO ZHAO, DE-SHUANG HUANG, JI-YAN DU, LI-MING WANG

This paper discusses using genetic algorithms (GAs) to optimize the structure of radial basis probabilistic neural networks (RBPNNs), including how to select the hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method not only makes the selected hidden centers sufficiently reflect the key distribution characteristics of the training sample space while keeping the number of hidden centers as small as possible, but also simultaneously determines the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. In addition, we propose a new fitness function that keeps the designed RBPNN as structurally simple as possible without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral problem and classifying the iris data, as examples to test and evaluate the designed GA. The experimental results show that our GA can significantly reduce the required number of hidden centers compared with the recursive orthogonal least squares algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, statistical experiments showed that the RBPNNs optimized by our GA still generalize better than those produced by ROLSA and MKA, even though the network scale is greatly reduced. Our experimental results also demonstrate that the designed GA is suitable for optimizing radial basis function neural networks (RBFNNs).
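The abstract does not spell out the encoding or the fitness function, so the following Python sketch is only one plausible realization under our own assumptions: each chromosome carries a selection bit per candidate center plus a real-valued width gene per center, and the fitness trades classification accuracy against network size (the penalty weight alpha and all function names are illustrative, not the authors' formulation).

```python
import numpy as np

def decode(chrom, candidates):
    """Split a chromosome into selected centers and their matched widths.

    chrom holds n selection bits followed by n real-valued width genes,
    where n = len(candidates) (an assumed encoding, for illustration).
    """
    n = len(candidates)
    mask = chrom[:n].astype(bool)
    widths = np.abs(chrom[n:][mask]) + 1e-6   # one sigma per kept center
    return mask, candidates[mask], widths

def rbpnn_predict(X, centers, widths, center_labels, n_classes):
    """First layer: Gaussian kernels; second layer: class-wise summation."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * widths ** 2))        # (n_samples, n_centers)
    scores = np.stack([K[:, center_labels == c].sum(axis=1)
                       for c in range(n_classes)], axis=1)
    return scores.argmax(axis=1)

def fitness(chrom, X, y, candidates, cand_labels, n_classes, alpha=0.05):
    """Reward accuracy, penalize size; alpha is an assumed trade-off weight."""
    mask, centers, widths = decode(chrom, candidates)
    if not mask.any():
        return 0.0                               # empty network is useless
    pred = rbpnn_predict(X, centers, widths, cand_labels[mask], n_classes)
    return (pred == y).mean() - alpha * mask.sum() / len(candidates)
```

Under this kind of encoding, dropping a center and retuning its matched width happen in the same genetic operation, which mirrors the abstract's point that center selection and kernel-width determination are handled simultaneously.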

2009, Vol. 18 (06), pp. 853-881
Author(s):
TODOR GANCHEV

In the present contribution we propose an integral training procedure for Locally Recurrent Probabilistic Neural Networks (LR PNNs). Specifically, the adjustment of the smoothing factor "sigma" in the pattern layer of the LR PNN and the training of the recurrent-layer weights are integrated in an automatic process that iteratively estimates all adjustable parameters of the LR PNN from the available training data. Furthermore, in contrast to the original LR PNN, whose recurrent layer was trained to provide optimum separation among the classes on the training dataset while striving to keep a balance between the learning rates for all classes, the training strategy here directly optimizes the overall classification accuracy. More precisely, the new training strategy targets maximizing the posterior probability estimated for the target class and minimizing the posterior probabilities estimated for the non-target classes. The new fitness function requires fewer computations per evaluation, and therefore the overall computational demand for training the recurrent-layer weights is reduced. The performance of the integrated training procedure is illustrated on three different speech processing tasks: emotion recognition, speaker identification and speaker verification.
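The abstract states the training criterion only in words; below is a minimal sketch of one plausible fitness of that form (the exact expression, any frame weighting, and the names are our assumptions, not the paper's definition). Note that when the posteriors are normalized to sum to one, rewarding the target class and penalizing the non-target mass reduces to maximizing the mean target posterior.

```python
import numpy as np

def posterior_fitness(posteriors, targets):
    """Fitness of candidate recurrent-layer weights (illustrative form).

    posteriors: (n_frames, n_classes) class posteriors from the LR PNN
    targets:    (n_frames,) integer target-class labels
    Rewards probability mass on the target class and penalizes the mass
    spread across the non-target classes; one subtraction per frame, so
    each evaluation stays cheap.
    """
    idx = np.arange(len(targets))
    p_target = posteriors[idx, targets]
    p_nontarget = posteriors.sum(axis=1) - p_target
    return float((p_target - p_nontarget).mean())
```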


Author(s):  
DE-SHUANG HUANG

This paper investigates the capabilities of radial basis function networks (RBFNs) and kernel neural networks (KNNs), i.e. a specific kind of probabilistic neural network (PNN), and studies their similarities and differences. In order to avoid the huge number of hidden units of the KNNs (or PNNs) and to reduce the training time of the RBFNs, this paper proposes a new feedforward neural network model referred to as the radial basis probabilistic neural network (RBPNN). This new network model inherits the merits of the two older models to a great extent and avoids their defects in some ways. Finally, we apply the new RBPNN to the recognition of one-dimensional cross-images of radar targets (five kinds of aircraft), and the experimental results are given and discussed.
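As a structural sketch of the hybrid idea (assumed details: Gaussian kernels and least-squares output training; the paper's concrete choices may differ): the RBPNN's first hidden layer holds far fewer kernel units than a PNN, which keeps one per training sample; the second hidden layer sums the kernel responses class-wise as a PNN does; and only the small output layer is trained, which keeps training cheaper than for a full RBFN.

```python
import numpy as np

def gaussian_layer(X, centers, sigma):
    """First hidden layer: one Gaussian unit per center, not per sample."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def class_sum_layer(K, center_labels, n_classes):
    """Second hidden layer: PNN-style summation of kernel outputs per class."""
    return np.stack([K[:, center_labels == c].sum(axis=1)
                     for c in range(n_classes)], axis=1)

def fit_output_weights(H, y, n_classes):
    """Output layer: least-squares fit to one-hot targets (RBFN-style)."""
    T = np.eye(n_classes)[y]
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W

# Usage sketch: H = class_sum_layer(gaussian_layer(X, centers, 1.0), lbl, k)
#               W = fit_output_weights(H, y, k)
#               pred = (class_sum_layer(gaussian_layer(Xt, centers, 1.0),
#                                       lbl, k) @ W).argmax(axis=1)
```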


2018, Vol. 6 (1), pp. 33-48
Author(s):
Sushil Kumar, Bipin Kumar Tripathi

Abstract The nonlinear spatial grouping of synapses is one of the fascinating methodologies by which neuro-computing researchers seek to achieve the computational power of a neuron. Researchers generally use neuron models based on summation (linear), product (linear) or radial basis (nonlinear) aggregation of synapses to construct multi-layered feed-forward neural networks, but each of these neuron models and its corresponding networks has its own advantages and disadvantages. A multi-layered network is generally used for global approximation of an input-output mapping but sometimes gets stuck in local minima, while the nonlinear radial basis function (RBF) network, built on exponentially decaying kernels, is used for local approximation of an input-output mapping. These trade-offs motivated the design of two new artificial neuron models based on compensatory aggregation functions in the quaternionic domain. The net internal potentials of these neuron models are formed as compositions of basic summation (linear) and radial basis (nonlinear) operations on quaternionic-valued input signals (see the sketch after the highlights below). Neuron models based on these aggregation functions ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of these neurons are verified on various three-dimensional transformations and time-series predictions as benchmark problems.

Highlights:
- Two new CSU and CPU neuron models for quaternionic signals are proposed.
- Net potentials are based on compositions of summation and radial basis functions.
- The nonlinear grouping of synapses achieves the computational power of the proposed neurons.
- The neuron models ensure faster convergence and better training and prediction accuracy.
- The learning and generalization capabilities of CSU/CPU are verified on various benchmark problems.
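The abstract does not specify how the summation and radial basis potentials are composed, so the sketch below assumes one simple possibility: a scalar Gaussian term gating a quaternionic weighted sum (qmul, the gating product, and all parameter names are our illustrative assumptions, not the paper's CSU/CPU definitions).

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def summation_potential(xs, ws, b):
    """Linear aggregation: sum of quaternionic weight * input, plus bias."""
    return sum((qmul(w, x) for w, x in zip(ws, xs)), b)

def radial_potential(xs, cs, lam=1.0):
    """Nonlinear aggregation: scalar Gaussian of the distance to centers."""
    d2 = sum(np.sum((x - c) ** 2) for x, c in zip(xs, cs))
    return np.exp(-lam * d2)

def compensatory_potential(xs, ws, b, cs, lam=1.0):
    """Assumed composition: the radial term gates the summation potential."""
    return radial_potential(xs, cs, lam) * summation_potential(xs, ws, b)

# Usage with two quaternionic inputs (random values, for illustration only)
rng = np.random.default_rng(0)
xs = [rng.standard_normal(4) for _ in range(2)]   # quaternionic inputs
ws = [rng.standard_normal(4) for _ in range(2)]   # quaternionic weights
cs = [rng.standard_normal(4) for _ in range(2)]   # radial basis centers
print(compensatory_potential(xs, ws, np.zeros(4), cs))  # net potential
```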

