The ART2 Network Based on Memorizing-Forgetting Mechanism

Author(s): Xiaoming Ye, Xiaozhu Lin, Xiaojuan Dai


2014, Vol 2014, pp. 1-11
Author(s): Xinran Zhou, Zijian Liu, Congxu Zhu

To apply single hidden-layer feedforward neural networks (SLFNs) to the identification of time-varying systems, this paper presents an online regularized extreme learning machine (ELM) with a forgetting mechanism (FORELM) and an online kernelized ELM with a forgetting mechanism (FOKELM). FORELM updates the output weights of the SLFN recursively via the Sherman-Morrison formula and combines the advantages of the online sequential ELM with forgetting mechanism (FOS-ELM) and the regularized online sequential ELM (ReOS-ELM): it captures the latest properties of the identified system by learning from a certain number of the newest samples, and it avoids ill-conditioned matrix inversion through regularization. FOKELM tackles the matrix-expansion problem of the kernel-based incremental ELM (KB-IELM) by deleting the oldest sample, using the block matrix inversion formula, as new samples arrive continually. Experimental results show that the proposed FORELM and FOKELM are more stable than FOS-ELM and more accurate than ReOS-ELM in nonstationary environments; moreover, FORELM and FOKELM are more time-efficient than the dynamic regression extreme learning machine (DR-ELM) under certain conditions.
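The recursive update at the heart of FORELM can be sketched in a few lines. Below is a minimal NumPy illustration of a sliding-window regularized ELM: a Sherman-Morrison rank-one update learns each new sample, and a matching rank-one downdate forgets the oldest one once the window is full. The class name, tanh activation, scalar output, and window policy are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

class ForgettingRegularizedELM:
    """Sketch of an online regularized ELM with a sliding-window
    forgetting mechanism (illustrative, not the paper's exact code)."""

    def __init__(self, n_inputs, n_hidden, window, lam=0.01, seed=None):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_hidden, n_inputs))  # random input weights
        self.b = rng.standard_normal(n_hidden)              # random hidden biases
        self.P = np.eye(n_hidden) / lam  # P = (H^T H + lam*I)^(-1) with H empty
        self.beta = np.zeros(n_hidden)   # output weights
        self.window = window
        self.buffer = []                 # (h, t) pairs of the newest samples

    def _hidden(self, x):
        return np.tanh(self.W @ x + self.b)

    def predict(self, x):
        return self._hidden(x) @ self.beta

    def update(self, x, t):
        h = self._hidden(x)
        # Sherman-Morrison rank-one update: learn the newest sample.
        Ph = self.P @ h
        self.P -= np.outer(Ph, Ph) / (1.0 + h @ Ph)
        self.beta += (self.P @ h) * (t - h @ self.beta)
        self.buffer.append((h, t))
        if len(self.buffer) > self.window:
            # Sherman-Morrison rank-one downdate: forget the oldest sample.
            h0, t0 = self.buffer.pop(0)
            Ph = self.P @ h0
            self.P += np.outer(Ph, Ph) / (1.0 - h0 @ Ph)
            self.beta -= (self.P @ h0) * (t0 - h0 @ self.beta)
```

Note that regularization keeps P well conditioned from the start (P = I/lam rather than the inverse of a possibly rank-deficient H^T H), which is the ill-conditioning advantage the abstract attributes to the ReOS-ELM side of the combination.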


2015, Vol 2015, pp. 1-10
Author(s): Yanhuang Jiang, Qiangli Zhao, Yutong Lu

Combining several classifiers over sequential chunks of training instances is a popular strategy for mining data streams with concept drifts. This paper introduces human recalling and forgetting mechanisms into a data stream mining system and proposes a Memorizing Based Data Stream Mining (MDSM) model. In this model, each component classifier is regarded as a piece of knowledge that a human obtains by studying some material, and it carries a memory retention value reflecting how useful it has been historically. Classifiers with high memory retention values are kept in a “knowledge repository.” When a new data chunk arrives, the most useful classifiers are selected (recalled) from the repository to compose the current target ensemble. Based on MDSM, we put forward a new algorithm, MAE (Memorizing Based Adaptive Ensemble), which uses the Ebbinghaus forgetting curve as the forgetting mechanism and adopts ensemble pruning as the recalling mechanism. Experimental comparisons with four popular data stream mining approaches on datasets with different concept drifts show that MAE achieves high and stable predictive accuracy, especially for applications with recurring or complex concept drifts. The results also demonstrate the effectiveness of the MDSM model.
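The recall/forget loop of MDSM is easy to sketch. The fragment below keeps a repository of classifiers, scores each one with the Ebbinghaus forgetting curve R = exp(-t/S), recalls the top-retention classifiers for a new chunk, and evicts the lowest-retention one when the repository overflows. The strength parameter, capacity policy, and eviction rule are assumptions for illustration; the abstract does not give MAE's exact update rules.

```python
import math

class KnowledgeRepository:
    """Sketch of MDSM-style recalling and forgetting over stored classifiers."""

    def __init__(self, capacity, ensemble_size):
        self.capacity = capacity
        self.ensemble_size = ensemble_size
        self.items = []  # (classifier, strength S, chunk index when stored)

    def retention(self, strength, age):
        # Ebbinghaus forgetting curve: R = exp(-t / S); larger S decays slower.
        return math.exp(-age / strength)

    def recall(self, chunk):
        # Recall the highest-retention classifiers as the current ensemble.
        ranked = sorted(self.items,
                        key=lambda it: self.retention(it[1], chunk - it[2]),
                        reverse=True)
        return [clf for clf, _, _ in ranked[:self.ensemble_size]]

    def store(self, classifier, strength, chunk):
        self.items.append((classifier, strength, chunk))
        if len(self.items) > self.capacity:
            # Forget the classifier whose memory retention is lowest.
            weakest = min(self.items,
                          key=lambda it: self.retention(it[1], chunk - it[2]))
            self.items.remove(weakest)
```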


2015, Vol 4 (1)
Author(s): R. Fallahpour, S. Chakouvari, H. Askari

Abstract: In this paper, the Laplace Adomian decomposition method (LADM) is used to evaluate a rumor-spreading model. First, a succinct review is given of the use of analytical methods, such as the Adomian decomposition method, the variational iteration method, and the homotopy analysis method, for epidemic models and biomathematics. A rumor-spreading model that accounts for a forgetting mechanism is then considered, and LADM is applied to solve it. The method yields a general solution that can readily be used to assess the rumor model without any computer program. The results obtained are discussed for different cases and parameters. The method is also shown to be straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for obtaining solutions of this model. It is concluded that the method is well suited to this problem and offers researchers a powerful vehicle for studying rumor models in diverse social networks such as Facebook, YouTube, Flickr, LinkedIn, and Twitter.
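For concreteness, rumor models with a forgetting mechanism are typically ignorant-spreader-stifler (ISR) systems in which spreaders spontaneously stop spreading at a forgetting rate δ. The abstract does not reproduce the authors' exact equations, so the system below is a representative form from this literature, not necessarily the one solved in the paper:

```latex
\begin{aligned}
\frac{dI}{dt} &= -\lambda\, I(t)\, S(t),\\
\frac{dS}{dt} &= \lambda\, I(t)\, S(t) - \sigma\, S(t)\bigl(S(t)+R(t)\bigr) - \delta\, S(t),\\
\frac{dR}{dt} &= \sigma\, S(t)\bigl(S(t)+R(t)\bigr) + \delta\, S(t).
\end{aligned}
```

LADM attacks such a system by taking the Laplace transform of each equation, expanding each compartment as a series (e.g., S(t) = Σₙ Sₙ(t)), replacing the nonlinear products IS and S(S+R) with Adomian polynomials, and inverting the transform term by term, which produces the kind of closed-form series solution the abstract refers to.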


2020, Vol 34 (04), pp. 4272-4279
Author(s): Ayush Jaiswal, Daniel Moyer, Greg Ver Steeg, Wael AbdAlmageed, Premkumar Natarajan

We propose a novel approach to achieving invariance in deep neural networks by inducing amnesia to unwanted factors of the data through a new adversarial forgetting mechanism. We show that the forgetting mechanism serves as an information bottleneck, which is manipulated by adversarial training to learn invariance to unwanted factors. Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings on a diverse collection of datasets and tasks.
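A minimal PyTorch-style sketch of the adversarial-forgetting idea as the abstract describes it: an encoder produces a code z, a forget gate produces a mask m in [0, 1] that acts as the information bottleneck, and the masked code z̃ = z ⊙ m feeds both the task predictor and an adversarial discriminator that probes z̃ for the unwanted factor. The layer sizes and single-layer heads here are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AdversarialForgetting(nn.Module):
    """Sketch of adversarial forgetting (illustrative architecture)."""

    def __init__(self, d_in, d_z, n_classes, n_nuisance):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_z), nn.ReLU())
        self.forget_gate = nn.Sequential(nn.Linear(d_in, d_z), nn.Sigmoid())
        self.predictor = nn.Linear(d_z, n_classes)       # main-task head
        self.discriminator = nn.Linear(d_z, n_nuisance)  # probes for the factor

    def forward(self, x):
        z = self.encoder(x)
        m = self.forget_gate(x)  # mask in [0, 1]: the information bottleneck
        z_tilde = z * m          # "forgotten" representation
        return self.predictor(z_tilde), self.discriminator(z_tilde), z_tilde
```

Training would alternate updates: the discriminator is fit to predict the unwanted factor from z̃, while the encoder, gate, and predictor are updated to solve the main task and to increase the discriminator's loss, pushing the mask to discard the unwanted information.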

