Fuzzily modular single-layer RBF neural networks for solving large-scale classification problems

Author(s):  
Gao Daqi ◽  
Tong Zhen
2019 ◽  
Vol 41 (13) ◽  
pp. 3612-3625 ◽  
Author(s):  
Wang Qian ◽  
Wang Qiangde ◽  
Wei Chunling ◽  
Zhang Zhengqiang

This paper addresses decentralized adaptive state-feedback neural tracking control for a class of stochastic nonlinear high-order interconnected systems. Under the assumption that the inverse dynamics of the subsystems are stochastic input-to-state stable (SISS), radial basis function (RBF) neural networks (NNs) are used in the controller design to cope with the packaged unknown system dynamics and stochastic uncertainties. In addition, appropriate Lyapunov-Krasovskii functions and parameters are constructed for a class of large-scale high-order stochastic nonlinear strongly interconnected systems with inverse dynamics. It is proved that the actual controller can be designed so as to guarantee that all signals in the closed-loop system remain semi-globally uniformly ultimately bounded and that the tracking errors eventually converge to a small neighborhood of the origin. A simulation example is given to show the effectiveness of the results.
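For a rough illustration of the approximation role the RBF networks play in such designs, the minimal Python/NumPy sketch below fits an RBF expansion W^T phi(x) to a stand-in unknown smooth function offline; the sample function, centers, width, and the batch least-squares fit are illustrative assumptions only, whereas the paper's controller would adapt the weights online inside the Lyapunov-based design.

import numpy as np

def rbf_features(X, centers, width):
    # Gaussian basis functions phi_i(x) = exp(-||x - c_i||^2 / width^2)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

# Stand-in for the packaged unknown dynamics f(x) to be approximated as W^T phi(x)
f = lambda X: np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))        # sampled states
centers = rng.uniform(-2, 2, size=(25, 2))   # fixed RBF centers (illustrative)
Phi = rbf_features(X, centers, width=1.0)

# Batch least-squares weights; an adaptive controller would update W online instead
W, *_ = np.linalg.lstsq(Phi, f(X), rcond=None)
print("max approximation error:", np.max(np.abs(Phi @ W - f(X))))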


2018 ◽  
Vol 77 ◽  
pp. 187-194 ◽  
Author(s):  
Emre Cimen ◽  
Gurkan Ozturk ◽  
Omer Nezih Gerek

1991 ◽  
Vol 3 (1) ◽  
pp. 135-143 ◽  
Author(s):  
Hyuek-Jae Lee ◽  
Soo-Young Lee ◽  
Sang-Yung Shin ◽  
Bo-Yun Koh

TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for optical implementation of large-scale artificial neural networks. For fully interconnected single-layer neural networks with N input and M output neurons, TAG contains two different types of interconnections, i.e., M × N global fixed interconnections and N + M adaptive gain controls. For two-dimensional input patterns, the former may be achieved by multifacet holograms, and the latter by spatial light modulators (SLMs). For the same number of input and output neurons, TAG requires far fewer adaptive elements and offers a route to large-scale optical implementation at some sacrifice in performance compared with the perceptron. The training algorithm is based on gradient descent and error backpropagation, and it is easily extensible to multilayer architectures. Computer simulations demonstrate that TAG performs reasonably well compared with the perceptron. An electrooptical implementation of TAG is also proposed.
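A minimal numerical sketch of the gain-only adaptation idea is given below in Python/NumPy; the tanh activation, the random fixed interconnection matrix, and the learning rate are assumptions made for illustration, not details taken from the paper. Only the N + M gains are updated, while the M × N interconnection matrix stays fixed.

import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 3                       # input / output neurons
W = rng.standard_normal((M, N))   # M x N fixed (non-adaptive) interconnections
a = np.ones(N)                    # N adaptive input gains
b = np.ones(M)                    # M adaptive output gains

def forward(x):
    s = W @ (a * x)               # fixed weights act on gain-scaled inputs
    return np.tanh(b * s), s

def train_step(x, t, lr=0.05):
    # Gradient-descent update of the gains only (error backpropagation)
    global a, b
    y, s = forward(x)
    g = (y - t) * (1.0 - y ** 2)          # output delta through tanh
    grad_b = g * s                        # dE/db_j = g_j * s_j
    grad_a = (W * x).T @ (g * b)          # dE/da_i = x_i * sum_j W_ji g_j b_j
    b -= lr * grad_b
    a -= lr * grad_a
    return 0.5 * np.sum((y - t) ** 2)

x, t = rng.uniform(-1, 1, N), np.array([1.0, -1.0, 1.0])
for _ in range(100):
    loss = train_step(x, t)
print("loss after 100 gain-only updates:", loss)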


Author(s):  
DAVID A. ELIZONDO ◽  
ROBERT MORRIS ◽  
TIM WATSON ◽  
BENJAMIN N. PASSOW

The recursive deterministic perceptron (RDP) is a generalization of the single-layer perceptron neural network. This neural network can separate, in a deterministic manner, any classification problem (linearly separable or not). It relies on the principle that in any nonlinearly separable (NLS) two-class classification problem, a linearly separable (LS) subset of one or more points belonging to one of the two classes can always be found. Small network topologies are obtained when the LS subsets are of maximum cardinality. This is referred to as the problem of maximum separability and has been proven to be NP-complete. Evolutionary computing techniques are applied to handle this problem more efficiently, in terms of complexity, than the standard approaches. These techniques enhance RDP training in terms of speed of convergence and level of generalization, and they provide an alternative for tackling large classification problems that would otherwise not be feasible with the algorithmic versions of the RDP training methods.
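To make the LS-subset idea concrete, here is a small hedged sketch in Python/NumPy that checks linear separability with a perceptron heuristic and grows an LS subset greedily; it is only a stand-in for the NP-complete maximum-separability search, for which the paper applies evolutionary computing rather than this naive greedy pass.

import numpy as np

def linearly_separable(A, B, epochs=200):
    # Heuristic check: run a perceptron; report True if it reaches zero errors.
    X = np.vstack([A, B])
    y = np.hstack([np.ones(len(A)), -np.ones(len(B))])
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            return True
    return False            # may misreport hard separable cases; heuristic only

def greedy_ls_subset(pos, neg):
    # Greedily grow a subset of `pos` that stays linearly separable from all of `neg`.
    subset = []
    for p in pos:
        trial = subset + [p]
        if linearly_separable(np.array(trial), neg):
            subset = trial
    return np.array(subset)

xor_pos = np.array([[0, 0], [1, 1]])
xor_neg = np.array([[0, 1], [1, 0]])
print(greedy_ls_subset(xor_pos, xor_neg))   # an LS subset of one XOR class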


2017 ◽  
Vol 29 (3) ◽  
pp. 861-866 ◽  
Author(s):  
Nolan Conaway ◽  
Kenneth J. Kurtz

Since the work of Minsky and Papert (1969), it has been understood that single-layer neural networks cannot solve nonlinearly separable classifications (i.e., XOR). We describe and test a novel divergent autoassociative architecture capable of solving nonlinearly separable classifications with a single layer of weights. The proposed network consists of class-specific linear autoassociators. The power of the model comes from treating classification problems as within-class feature prediction rather than directly optimizing a discriminant function. We show unprecedented learning capabilities for a simple, single-layer network (i.e., solving XOR) and demonstrate that the famous limitation in acquiring nonlinearly separable problems is not just about the need for a hidden layer; it is about the choice between directly predicting classes or learning to classify indirectly by predicting features.
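The following Python/NumPy sketch illustrates the divergent, class-specific autoassociative read-out on XOR; the logistic output units, squared-error gradient training, and all hyperparameters are assumptions made for this illustration rather than the authors' exact model. Each class gets its own single-layer autoassociator, and a point is assigned to the class whose channel reconstructs it best.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoassociator(X, lr=0.5, epochs=5000):
    # One layer of weights (plus bias) with logistic outputs, trained to
    # reproduce its own class's feature vectors (x -> x).
    d = X.shape[1]
    W, b = np.zeros((d, d)), np.zeros(d)
    for _ in range(epochs):
        Y = sigmoid(X @ W + b)
        G = (Y - X) * Y * (1 - Y)          # squared-error gradient through sigmoid
        W -= lr * X.T @ G / len(X)
        b -= lr * G.mean(axis=0)
    return W, b

def classify(x, models):
    # Divergent read-out: pick the class whose channel reconstructs x best.
    errs = [np.sum((sigmoid(x @ W + b) - x) ** 2) for W, b in models]
    return int(np.argmin(errs))

# XOR: class 0 = {(0,0),(1,1)}, class 1 = {(0,1),(1,0)}
X0 = np.array([[0., 0.], [1., 1.]])
X1 = np.array([[0., 1.], [1., 0.]])
models = [train_autoassociator(X0), train_autoassociator(X1)]
for x in np.vstack([X0, X1]):
    print(x, "->", classify(x, models))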


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 792
Author(s):  
Dongbao Jia ◽  
Yuka Fujishita ◽  
Cunhua Li ◽  
Yuki Todo ◽  
Hongwei Dai

With its simple structure and low cost, the dendritic neuron model (DNM) is used as a neuron model for solving complex nonlinear problems with high precision. Although the DNM achieves higher accuracy and effectiveness than the multilayer perceptron with its middle layer on small-scale classification problems, there are no examples applying it to large-scale classification problems. To achieve better performance on practical problems, this experiment uses a neural network with random weights trained by an approximate Newton-type method for comparison, and three learning algorithms, back-propagation (BP), biogeography-based optimization (BBO), and a competitive swarm optimizer (CSO), to train the DNM. Moreover, three classification problems are solved with the above learning algorithms to verify their precision and effectiveness on large-scale classification problems. As a consequence, DNM + BP is the best in terms of execution time; DNM + CSO performs best when both accuracy stability and execution time are considered together; and, considering overall stability of performance and convergence rate, DNM + BBO is a wise choice.
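As context for what these learning algorithms are tuning, a minimal forward pass of a standard dendritic neuron model is sketched below in Python/NumPy; the branch count, synaptic steepness k, and soma parameters are illustrative assumptions, and BP, BBO, or CSO would be wrapped around this forward pass to fit the synaptic weights w and thresholds q.

import numpy as np

def dnm_forward(x, w, q, k=5.0, ks=5.0, qs=0.5):
    # Dendritic neuron model forward pass (standard four-layer formulation).
    # x: (n_features,); w, q: (n_branches, n_features)
    syn = 1.0 / (1.0 + np.exp(-k * (w * x - q)))       # synaptic layer
    dend = np.prod(syn, axis=1)                         # dendritic layer: product per branch
    membrane = np.sum(dend)                             # membrane layer: sum over branches
    return 1.0 / (1.0 + np.exp(-ks * (membrane - qs)))  # soma output

rng = np.random.default_rng(2)
n_branches, n_features = 4, 3
w = rng.uniform(-1, 1, size=(n_branches, n_features))
q = rng.uniform(-1, 1, size=(n_branches, n_features))
print(dnm_forward(rng.uniform(0, 1, size=n_features), w, q))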

