Incremental learning with a homeostatic self-organizing neural model

2019
Vol 32 (24)
pp. 18101-18121
Author(s):  
Alexander Gepperth
Author(s):  
Junpei Zhong
Angelo Cangelosi
Stefan Wermter
Author(s):  
Shogo Okada
Osamu Hasegawa

We segment and symbolize image information from a series of human behaviors, treated as aggregate units of motion, in a self-organizing manner, and propose a system that recognizes the entire behavior as a symbol string. The system symbolizes motion units incrementally and can also generate motion from a symbol. To implement the system, we used a mixture of experts, with a non-monotonous recurrent neural network as each expert, together with our own DP matching method. In addition, our method makes not only teacher-labeled patterns but also unlabeled patterns available for learning; using this capability, we propose a semi-supervised learning scheme. We evaluated the effectiveness of the proposed system and of its semi-supervised learning function in two experiments using moving images containing seven gestures.
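The "DP matching" named above is a dynamic-programming sequence-alignment technique; the paper's own variant is not reproduced in this listing, but a minimal generic DP/DTW-style matcher for comparing motion sequences of different speeds might look like the sketch below (the sequences and the absolute-difference cost are illustrative assumptions, not the authors' method):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming matching cost between two sequences.

    a, b: 1-D sequences of feature values. Returns the minimal
    cumulative alignment cost, allowing local stretching/compression
    so that the same motion at different speeds still aligns cheaply.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same gesture performed slowly and quickly aligns cheaply;
# a different gesture does not.
slow = [0, 0, 1, 1, 2, 2, 1, 1, 0, 0]
fast = [0, 1, 2, 1, 0]
other = [2, 2, 0, 0, 2]
print(dtw_distance(slow, fast) < dtw_distance(slow, other))  # True
```

Because the cumulative table allows diagonal, vertical, and horizontal steps, each element of the slow sequence can map onto a repeated element of the fast one, which is what makes the matcher insensitive to playback speed.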


2013
Vol 756-759
pp. 3330-3335
Author(s):  
Ji Fu Nong

We propose a new self-organizing neural model that performs principal component analysis. The model is related to the adaptive subspace self-organizing map (ASSOM) network, but its training equations are simpler. Experimental results show that the new model outperforms the ASSOM network.
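The model's simplified training equations are not given in this listing, but neural PCA is classically done with a Hebbian update plus a decay term. A minimal sketch using Oja's rule (a standard neural PCA update, shown here for illustration and not necessarily the proposed model) that extracts the first principal component:

```python
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
    """Extract the first principal component with Oja's rule.

    X: (n_samples, n_features) array, assumed zero-mean.
    Returns a unit-norm weight vector approximating the top PC.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                    # neuron output
            w += lr * y * (x - y * w)    # Hebbian term with weight decay
    return w / np.linalg.norm(w)

# Correlated 2-D data whose principal axis is close to (1, 1)/sqrt(2).
rng = np.random.default_rng(1)
t = rng.normal(size=(500, 1))
X = np.hstack([t, t]) + 0.1 * rng.normal(size=(500, 2))
X -= X.mean(axis=0)
w = oja_first_pc(X)
print(np.abs(w))  # both components near 0.707
```

The decay term `-y * y * w` is what keeps the weight vector bounded and drives it toward the leading eigenvector of the data covariance, rather than letting pure Hebbian growth diverge.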


2001 ◽  
Vol 13 (1) ◽  
pp. 18-30
Author(s):  
Zach Solan
Eytan Ruppin

This paper presents a neural model of similarity perception in identification tasks. It is based on self-organizing maps and population coding and is examined through five different identification experiments. Simulating an identification task, the model generates a confusion matrix that can be compared directly with that of human subjects. The model achieves a fairly accurate match with the corresponding experimental data, both during training and thereafter. To achieve this fit, we find that the overall activity in the network should decline while learning the identification task, and that the population encoding of specific stimuli should become sparse as the network organizes. Our results thus suggest that a self-organizing neural model employing population coding can account for identification processing, while also suggesting computational constraints on the underlying cortical networks.
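As a rough illustration of the ingredients named above (a self-organizing map whose units jointly form a population code over stimuli), a minimal 1-D SOM sketch, with the grid size, learning-rate and neighborhood schedules all chosen for illustration rather than taken from the paper:

```python
import numpy as np

def train_som(X, grid=8, epochs=30, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D self-organizing map on X (n_samples, n_features).

    Neighboring map units end up tuned to similar stimuli, so the
    pattern of unit activities over the map forms a population code.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid, X.shape[1]))   # unit weight vectors
    pos = np.arange(grid)                     # unit positions on the map
    steps = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            # Best-matching unit: the unit closest to the stimulus.
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            frac = 1.0 - t / steps
            lr = lr0 * frac                   # decaying learning rate
            sigma = sigma0 * frac + 0.5       # shrinking neighborhood
            h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)    # pull BMU and neighbors
            t += 1
    return W

# One-dimensional stimuli in [0, 1]; after training, the map weights
# spread across the stimulus range.
X = np.linspace(0.0, 1.0, 50)[:, None]
W = train_som(X)
print(W.ravel().round(2))
```

The Gaussian neighborhood `h` is what produces the population code: each stimulus activates (and updates) a whole neighborhood of units rather than a single winner, and shrinking `sigma` over training is one way the encoding of specific stimuli can become sparser as the map organizes.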

