Associative learning in random environments using neural networks

1991 ◽  
Vol 2 (1) ◽  
pp. 20-31 ◽  
Author(s):  
K.S. Narendra ◽  
S. Mukhopadhyay

2021 ◽  
Vol 443 ◽  
pp. 222-234 ◽  
Author(s):  
Jia Liu ◽  
Wenhua Zhang ◽  
Fang Liu ◽  
Liang Xiao

2004 ◽  
Vol 17 (10) ◽  
pp. 1495 ◽  
Author(s):  
Misha Tsodyks ◽  
Yael Adini ◽  
Dov Sagi

2007 ◽  
Vol 362 (1479) ◽  
pp. 449-454 ◽  
Author(s):  
Stefano Ghirlanda ◽  
Magnus Enquist

We show that a simple network model of associative learning can reproduce three findings that arise from particular training and testing procedures in generalization experiments: the effects of (i) 'errorless learning' and (ii) extinction testing on peak shift, and (iii) the central tendency effect. These findings provide a true test of the network model, which was developed to account for other phenomena, and highlight the potential of neural networks for studying phenomena that depend on sequences of experiences with many stimuli. Our results suggest that at least some such phenomena, e.g. stimulus range effects, may derive from basic mechanisms of associative memory rather than from more complex memory processes.
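
For illustration only (a minimal sketch, not the authors' code), the peak-shift part of such an account can be reproduced with a one-layer associative network trained by the delta rule over a distributed 1-D stimulus representation; the Gaussian receptive fields, learning rate, and S+/S- values below are assumptions made for the example.

import numpy as np

# Sketch of a simple associative network on a 1-D stimulus dimension.
# 100 input units with Gaussian receptive fields tile the dimension;
# a single output is trained with the delta rule (all settings assumed).
N_UNITS = 100
CENTERS = np.linspace(0.0, 1.0, N_UNITS)
SIGMA = 0.05   # receptive-field width (assumed)
LR = 0.05      # learning rate (assumed)

def encode(stimulus):
    # Population code: Gaussian activation of each input unit.
    return np.exp(-0.5 * ((stimulus - CENTERS) / SIGMA) ** 2)

def train(trials):
    # Delta-rule updates on (stimulus, reinforcement) pairs.
    w = np.zeros(N_UNITS)
    for stimulus, target in trials:
        x = encode(stimulus)
        w += LR * (target - w @ x) * x
    return w

# Discrimination training: reinforce S+ = 0.5, extinguish S- = 0.6.
w = train([(0.5, 1.0), (0.6, 0.0)] * 100)

# Generalization test across the dimension.
test = np.linspace(0.0, 1.0, 201)
responses = np.array([w @ encode(s) for s in test])
print("S+ = 0.50, peak of generalization gradient at", round(float(test[responses.argmax()]), 3))

With these assumed settings the printed peak falls slightly below S+ = 0.5, i.e. displaced away from the non-reinforced stimulus, which is the peak-shift signature the abstract refers to.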


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Wentao Wang ◽  
Wei Chen

Abstract: By introducing some parameters perturbed by white noise, we propose a class of stochastic inertial neural networks in random environments. By constructing two Lyapunov–Krasovskii functionals, we establish mean-square exponential input-to-state stability for the addressed model, which generalizes and refines recent results. In addition, an example with a numerical simulation is carried out to support the theoretical findings.
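
The listing does not reproduce the paper's equations; as a hedged sketch (all symbols below are assumed notation, not taken from the paper), a stochastic inertial neural network with white-noise-perturbed parameters is typically written as the second-order Itô system

\mathrm{d}\dot{x}_i(t) = \Big[-a_i \dot{x}_i(t) - b_i x_i(t)
  + \sum_{j=1}^{n} c_{ij} f_j\big(x_j(t)\big)
  + \sum_{j=1}^{n} d_{ij} f_j\big(x_j(t - \tau_{ij}(t))\big) + u_i(t)\Big]\,\mathrm{d}t
  + \sigma_i\big(x_i(t), x_i(t - \tau_{ij}(t))\big)\,\mathrm{d}B_i(t),
  \qquad i = 1, \dots, n,

and mean-square exponential input-to-state stability is an estimate of the form

\mathbb{E}\,\|x(t)\|^{2} \le \alpha\, e^{-\beta t} \sup_{-\tau \le s \le 0} \mathbb{E}\,\|\varphi(s)\|^{2}
  + \gamma\Big(\sup_{0 \le s \le t} \|u(s)\|\Big), \qquad t \ge 0,

for some constants \alpha, \beta > 0 and a class-\mathcal{K} function \gamma, where the B_i are independent Brownian motions, u is the external input, and \varphi is the initial history; Lyapunov–Krasovskii functionals are the standard tool for deriving such a bound.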


2008 ◽  
Vol 16 (6) ◽  
pp. 361-384 ◽  
Author(s):  
Eduardo Izquierdo ◽  
Inman Harvey ◽  
Randall D. Beer
