Associate learning and correcting in a memristive neural network

2012 · Vol. 22 (6) · pp. 1071-1076 · Author(s): Ling Chen, Chuandong Li, Xin Wang, Shukai Duan

2014 · Vol. 36 (12) · pp. 2577-2586 · Author(s): Si-Wei Xia, Shu-Kai Duan, Li-Dan Wang, Xiao-Fang Hu

2018 · Vol. 93 (4) · pp. 1823-1840 · Author(s): I. Carro-Pérez, C. Sánchez-López, H. G. González-Hernández

2019 · Vol. 5 (6) · Article 1800740 · Author(s): Hanchan Song, Young Seok Kim, Juseong Park, Kyung Min Kim

2019 · Vol. 13 (5) · pp. 475-488 · Author(s): Xun Ji, Xiaofang Hu, Yue Zhou, Zhekang Dong, Shukai Duan

2001 · Vol. 13 (9) · pp. 2075-2092 · Author(s): Daniel S. Rizzuto, Michael J. Kahana

Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A), but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network where A and B are two parts of a single, stored pattern. We analyze a general, recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis of associative learning supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.
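The storage scheme the abstract describes is straightforward to illustrate. Below is a minimal NumPy sketch written for this summary, not the authors' fitted model: it stores a single A-B pair with separately weighted Hebbian forward and backward matrices, then shows that the equal-weight special case reduces to one autoassociative network holding the concatenated pattern [A, B]. The pattern size `n` and the strength parameters `gamma_f` and `gamma_b` are arbitrary assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                  # units per item (arbitrary choice)
a = rng.choice([-1, 1], size=n)         # item A as a bipolar pattern
b = rng.choice([-1, 1], size=n)         # item B

# Heteroassociative Hebbian storage: forward (A -> B) and backward
# (B -> A) associations with independent strengths (hypothetical values).
gamma_f, gamma_b = 1.0, 0.6
W_fwd = gamma_f * np.outer(b, a)        # cue with A, retrieve B
W_bwd = gamma_b * np.outer(a, b)        # cue with B, retrieve A
assert np.array_equal(np.sign(W_fwd @ a), b)   # forward recall succeeds
assert np.array_equal(np.sign(W_bwd @ b), a)   # backward recall succeeds
# Setting gamma_b = 0 stores only the forward association -- the
# asymmetry the abstract starts from.

# Equal-strength special case: one autoassociative network storing
# the single concatenated pattern p = [A, B].
p = np.concatenate([a, b])
W_auto = np.outer(p, p)
np.fill_diagonal(W_auto, 0)             # no self-connections

cue = np.concatenate([a, np.zeros(n)])  # present A, leave B blank
completed = np.sign(W_auto @ cue)       # one synchronous update
assert np.array_equal(completed[n:], b)        # B recovered from A
back = np.sign(W_auto @ np.concatenate([np.zeros(n), b]))
assert np.array_equal(back[:n], a)             # A recovered from B
```

Cueing the autoassociative network with A alone completes the missing B half in a single update (and vice versa), which is the "single stored pattern" view the abstract argues for; recall noise, multiple stored pairs, and iterative dynamics are omitted here.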

