A heuristic error-feedback learning algorithm for fuzzy modeling

Author(s): Ji-Chang Lo, Chien-Hsing Yang

1996, Vol. 118 (2), pp. 341-346
Author(s): C. J. Goh, W. Y. Yan

Conventionally, an iterative learning control law is updated using past error information. As a result, the controller is effectively open-loop: besides being unable to control unstable systems, it does not share the robustness properties of feedback systems. It is proposed that current error feedback be used to update the learning control law instead. We present a systematic design procedure based on H∞ control theory to construct a robust current error feedback learning control law for linear time-invariant, and possibly unstable, systems. The optimal design ensures (1) that the closed-loop system is stable; (2) that the convergence rate is optimal about the nominal plant; and (3) robustness in the presence of perturbed or unmodeled dynamics, or nonlinearity.
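To illustrate the distinction the abstract draws, the sketch below contrasts a classical past-error ILC update with a current-error-feedback scheme on a toy scalar plant. The plant, gains, and trial counts are hypothetical illustration values, not taken from the paper, and no H∞ synthesis is performed here.

```python
import numpy as np

# Minimal sketch: classical ILC (past-error update) vs. current-error-feedback
# learning on a hypothetical scalar LTI plant x+ = a*x + b*u.
a, b = 0.9, 1.0            # stable nominal plant (illustrative values)
T = 50                     # trial length
r = np.ones(T)             # reference trajectory
L_past, L_curr = 0.5, 0.5  # learning gains (illustrative values)

def run_trial(u):
    """Simulate one trial and return the output and tracking error."""
    x, y = 0.0, np.zeros(T)
    for k in range(T):
        y[k] = x
        x = a * x + b * u[k]
    return y, r - y

# Classical ILC: u_{j+1}(k) = u_j(k) + L * e_j(k+1), using the *previous* trial's error.
u = np.zeros(T)
for trial in range(20):
    y, e = run_trial(u)
    u = u + L_past * np.roll(e, -1)   # shift aligns e(k+1) with u(k)
print("classical ILC final error norm:", np.linalg.norm(e))

# Current-error feedback: the input is corrected during the ongoing trial,
# so the controller is genuinely closed-loop on the current error.
u_ff = np.zeros(T)
for trial in range(20):
    x, e_curr = 0.0, np.zeros(T)
    for k in range(T):
        e_curr[k] = r[k] - x              # error measured on the current trial
        u_k = u_ff[k] + L_curr * e_curr[k]
        x = a * x + b * u_k
    u_ff = u_ff + L_curr * e_curr         # learning update for the next trial
print("current-error feedback final error norm:", np.linalg.norm(e_curr))
```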


2020, Vol. 34 (04), pp. 3105-3112
Author(s): Afshin Abdi, Faramarz Fekri

In distributed training of deep models, the transmission volume of stochastic gradients (SG) imposes a bottleneck in scaling up the number of processing nodes. On the other hand, the existing methods for compressing SGs have two major drawbacks. First, because compression increases the overall variance of the SG, the hyperparameters of the learning algorithm must be readjusted to ensure convergence of the training, and the convergence rate of the resulting algorithm is still adversely affected. Second, for approaches in which the compressed SG values are biased, there is no guarantee of learning convergence, so an error feedback is often required. We propose Quantized Compressive Sampling (QCS) of SG, which addresses both issues while achieving an arbitrarily large compression gain. We introduce two variants of the algorithm, Unbiased-QCS and MMSE-QCS, and show their superior performance with respect to other approaches. Specifically, we show that for the same number of communication bits, the convergence rate is improved by a factor of 2 relative to the state of the art. Next, we propose to improve the convergence rate of the distributed training algorithm via a weighted error feedback. Specifically, we develop and analyze a method to both control the overall variance of the compressed SG and prevent staleness of the updates. Finally, through simulations, we validate our theoretical results and establish the superior performance of the proposed SG compression in the distributed training of deep models. Our simulations also demonstrate that the proposed compression method substantially expands the range of step-size values for which the learning algorithm converges.
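For readers unfamiliar with the generic error-feedback loop referred to above, the sketch below shows it in its simplest weighted form. The top-k compressor, the weighting factor beta, and the toy objective are placeholders for illustration; they are not the authors' QCS quantizer or analysis.

```python
import numpy as np

def topk_compress(g, k):
    """Placeholder compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def sgd_with_weighted_error_feedback(grad_fn, w, steps, lr=0.1, k=2, beta=1.0):
    """Generic weighted error-feedback loop; beta weights the residual carried
    over to the next step (beta = 1 is plain error feedback)."""
    residual = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        corrected = g + beta * residual    # fold back past compression error
        compressed = topk_compress(corrected, k)
        residual = corrected - compressed  # what the compressor dropped
        w = w - lr * compressed            # only `compressed` would be transmitted
    return w

# Toy quadratic objective f(w) = 0.5 * ||w - w*||^2 (hypothetical example)
w_star = np.array([1.0, -2.0, 3.0, 0.5])
grad = lambda w: w - w_star
w = sgd_with_weighted_error_feedback(grad, np.zeros(4), steps=200)
print("distance to optimum:", np.linalg.norm(w - w_star))
```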


Author(s): Wei Li, Yupu Yang

In this paper, we propose a novel approach to fast fuzzy modeling based on a new incremental support vector regression (SVR). First, a candidate support vector selection strategy based on the kernel Mahalanobis distance is proposed. This strategy is then used to develop a new incremental learning algorithm that speeds up the training of the SVR. A hybrid kernel function is then utilized to represent the SVR model as a TS fuzzy model, and finally a set of fuzzy rules is extracted directly from the learning results of the SVR. Experimental results on two benchmark examples show that the proposed model not only possesses satisfactory accuracy and generalization ability but also requires less computational time.
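As a rough illustration of a Mahalanobis-type distance computed in a kernel-induced space, the sketch below scores samples by their distance from the data mean in the empirical kernel map and keeps the highest-scoring ones as candidates. The RBF kernel, regularization, and candidate fraction are assumptions for illustration and are a simplified stand-in for the measurement and selection rule used in the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix between row-sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_mahalanobis_scores(X, reg=1e-3, gamma=0.5):
    """Mahalanobis-type distance of each sample from the data mean, computed
    in the empirical kernel feature space (row i of K = kernel features of x_i)."""
    K = rbf_kernel(X, X, gamma)
    mu = K.mean(axis=0)
    C = np.cov(K, rowvar=False) + reg * np.eye(K.shape[0])  # regularized covariance
    Cinv = np.linalg.inv(C)
    diffs = K - mu
    return np.einsum('ij,jk,ik->i', diffs, Cinv, diffs)     # per-sample quadratic form

# Rank training samples; the most "atypical" ones are kept as candidate
# support vectors before running the incremental SVR update (hypothetical rule).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
scores = kernel_mahalanobis_scores(X)
candidates = np.argsort(scores)[-20:]   # e.g., keep the top 20% as candidates
print("candidate indices:", candidates[:5])
```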


Sensors, 2019, Vol. 19 (7), pp. 1596
Author(s): Huajun Liu, Liwei Xia, Cailing Wang

Tracking maneuvering targets is a challenging problem for sensors because of the unpredictability of the target's motion. Unlike classical statistical modeling of target maneuvers, a simultaneous optimization and feedback learning algorithm for maneuvering target tracking based on the Elman neural network (ENN) is proposed in this paper. In the feedback strategy, a scale factor is learned to adaptively tune the error covariance matrix of the dynamic model, and in the optimization strategy, a corrected component of the state vector is learned to refine the final state estimate. These two strategies are integrated into an ENN-based unscented Kalman filter (UKF) model called ELM-UKF. The filter can be trained online on the filter residual, innovation, and gain matrix of the UKF to achieve maneuver feedback and an optimized estimate simultaneously. Monte Carlo experiments on synthesized radar data show that our algorithm achieves better filtering precision than most maneuvering target tracking algorithms.
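To make the two learned adjustments concrete, the sketch below wraps them around a plain linear Kalman filter with a constant-velocity model rather than a full UKF, and uses simple placeholder functions where the paper uses a trained Elman network. The model, noise values, measurements, and correction rules are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])    # state: [position, velocity] (assumed model)
H = np.array([[1.0, 0.0]])         # position-only measurement
Q0 = 0.01 * np.eye(2)              # nominal process-noise covariance
R = np.array([[0.5]])

def feedback_scale(innovation):
    """Placeholder for the ENN 'feedback' output: a scale factor that inflates
    the process noise when the innovation is large (suggesting a maneuver)."""
    return 1.0 + 10.0 * float(innovation.T @ innovation)

def state_correction(residual, gain):
    """Placeholder for the ENN 'optimization' output: a learned correction
    built from the filter residual and gain matrix."""
    return 0.1 * (gain @ residual).ravel()

x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 3.0, 5.5, 8.9]:          # synthetic range measurements
    # Predict with adaptively scaled Q (feedback strategy).
    x_pred = F @ x
    innovation = np.array([[z]]) - H @ x_pred
    Q = feedback_scale(innovation) * Q0
    P = F @ P @ F.T + Q
    # Standard Kalman update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x_pred + (K @ innovation).ravel()
    P = (np.eye(2) - K @ H) @ P
    # Refine the estimate with a learned correction (optimization strategy).
    x = x + state_correction(innovation, K)
print("final state estimate:", x)
```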

