An entropic associative memory

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Luis A. Pineda ◽  
Gibrán Fuentes ◽  
Rafael Morales

Abstract Natural memories are associative, declarative and distributed, and memory retrieval is a constructive operation. In addition, cues of objects that are not contained in the memory are rejected directly. Symbolic computing memories resemble natural memories in their declarative character, and information can be stored and recovered explicitly; however, they are reproductive rather than constructive, and lack the associative and distributed properties. Sub-symbolic memories developed within the connectionist or artificial neural networks paradigm are associative and distributed, but they lack the declarative property and the capability of rejecting objects that are not included in the memory, and memory retrieval is again reproductive. In this paper we present a memory model that sustains all five properties of natural memories. We use Relational-Indeterminate Computing to model associative memory registers that hold distributed representations of individual objects. This mode of computing has an intrinsic computing entropy, which measures the indeterminacy of representations; this parameter determines the operational characteristics of the memory. Associative registers are embedded in an architecture that maps concrete images expressed in modality-specific buffers into abstract representations and vice versa. The framework has been used to model a visual memory holding the representations of hand-written digits, and the system has been tested with a set of memory recognition and retrieval experiments using complete and severely occluded images. The results show that there is a range of entropy values, neither too low nor too high, in which associative memory registers perform satisfactorily. The experiments were implemented in a simulation on a standard computer with a GPU, but a parallel architecture could be built in which the memory operations would take a greatly reduced number of computing steps.
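The abstract's description of an associative memory register, a distributed table into which object representations are superimposed and whose entropy measures the indeterminacy of the stored content, can be sketched in code. The following Python fragment is a minimal illustration based only on that description; the class name, the table layout (one marked row per column per stored object), and the entropy formula (the average over columns of the log of the number of marked values) are assumptions, not the authors' implementation.

```python
import numpy as np

class AssociativeMemoryRegister:
    """Sketch of an associative memory register: an m-row, n-column
    boolean table into which object representations (one row index
    per column) are superimposed by logical disjunction."""

    def __init__(self, n_columns, m_rows):
        self.table = np.zeros((m_rows, n_columns), dtype=bool)

    def register(self, pattern):
        """Abstraction: store a pattern by marking its cells. Stored
        objects overlap in the same table, so the representation is
        distributed rather than localized."""
        cols = np.arange(self.table.shape[1])
        self.table[pattern, cols] = True

    def entropy(self):
        """Assumed form of the computing entropy: the average, over
        columns, of log2 of the number of marked values. Low entropy
        means determinate content; high entropy means indeterminate."""
        counts = np.maximum(self.table.sum(axis=0), 1)
        return float(np.mean(np.log2(counts)))

# Illustrative use: entropy grows as more overlapping objects are stored.
reg = AssociativeMemoryRegister(n_columns=4, m_rows=8)
reg.register(np.array([2, 5, 1, 7]))
reg.register(np.array([2, 4, 1, 6]))
print(reg.entropy())
```

On this reading, storing more objects raises the register's entropy, which the abstract says must stay in a middle range for recognition and retrieval to perform well.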

2016 ◽  
Vol 186 ◽  
pp. 44-53 ◽  
Author(s):  
Caigen Zhou ◽  
Xiaoqin Zeng ◽  
Jianjiang Yu ◽  
Haibo Jiang

2021 ◽  
Author(s):  
Rafael Morales ◽  
Noé Hernández ◽  
Ricardo Cruz ◽  
Victor D. Cruz ◽  
Luis A. Pineda

Abstract Manuscript symbols can be stored, recognized and retrieved from an entropic digital memory that is associative and distributed yet declarative; memory retrieval is a constructive operation; symbols not contained in the memory are rejected directly, without search; and memory operations can be performed through parallel computations. Manuscript symbols, both letters and numerals, are stored in Associative Memory Registers that have an associated entropy. The memory recognition operation obeys an entropy trade-off between precision and recall, and the entropy level affects the quality of the objects recovered through the memory retrieval operation. We discuss the operational characteristics of the system for retrieving objects with both complete and incomplete information, such as severe occlusions. The experiments reported in this paper add evidence supporting the scalability of the framework and its potential for developing practical applications. We also compare the present entropic associative memories to Hopfield's paradigm and discuss their potential for the study of natural memory.
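The recognition and retrieval operations this abstract describes, rejection without search of symbols not contained in the memory and constructive retrieval whose quality depends on the entropy level, can be hedged into code as follows. This is a sketch under assumptions: recognition is modeled as a containment (material-implication) test of the cue against the register table from the previous sketch, and retrieval as a random choice among the marked values of each column, which is what makes the operation constructive rather than reproductive. Function names and the convention of marking occluded cells with -1 are illustrative, not from the paper.

```python
import numpy as np

def recognize(table, cue):
    """Recognition sketch: accept the cue only if every cell it marks
    is already marked in the register. A cue not contained in the
    memory fails this test and is rejected directly, with no search
    through the stored objects. Occluded cells (-1) are skipped."""
    for j, row in enumerate(cue):
        if row >= 0 and not table[row, j]:
            return False
    return True

def retrieve(table, cue, rng):
    """Constructive retrieval sketch: for each column, keep the cue's
    own cell when it is marked in the register; otherwise choose one
    marked row at random, so occluded cells are filled in from the
    distributed content rather than copied from a stored exemplar."""
    m, n = table.shape
    out = np.empty(n, dtype=int)
    for j in range(n):
        if cue[j] >= 0 and table[cue[j], j]:
            out[j] = cue[j]
        else:
            marked = np.flatnonzero(table[:, j])
            out[j] = int(rng.choice(marked)) if marked.size else -1
    return out

rng = np.random.default_rng(0)
```

Under these assumptions the precision/recall trade-off follows naturally: the more marked values per column (higher entropy), the more cues pass the containment test and the noisier each reconstruction becomes.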


1988 ◽  
Vol 13 (1) ◽  
pp. 74 ◽  
Author(s):  
Sang-Hoon Oh ◽  
Tae-Hoon Yoon ◽  
Jae Chang Kim

2015 ◽  
Vol 162 ◽  
pp. 201-208 ◽  
Author(s):  
Caigen Zhou ◽  
Xiaoqin Zeng ◽  
Haibo Jiang ◽  
Lixin Han

2014 ◽  
pp. 8-15 ◽  
Author(s):  
M. Kamrul Islam

In neural networks, an associative memory is one in which applying an input pattern elicits the response of a corresponding stored pattern. During the learning phase the memory is fed a number of input vectors, and in the recall phase, when a known input is presented, the network recalls and reproduces the corresponding output vector. Here, we improve and increase the storing ability of the memory model proposed in [1]. We show that there are instances in which their algorithm cannot deliver the desired performance of retrieving exactly the correct vector: several output vectors can become activated by the stimulus of a single input vector when the desired output is just one vector. Our proposed solution overcomes this and uniquely determines the output vector when an input vector is applied. We thus provide a more general treatment of this neural network memory model consisting of Competitive Cooperative Neurons (CCNs).
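The failure mode this abstract describes, several stored outputs becoming active for one input when exactly one should win, and the fix of uniquely determining the output can be illustrated with a generic hetero-associative recall step. The sketch below is not the paper's CCN algorithm; it shows one standard way to force a unique winner, namely nearest stored key by Hamming distance with a deterministic tie-break.

```python
import numpy as np

def recall_unique(keys, values, query):
    """Hetero-associative recall with a unique winner (illustrative,
    not the CCN scheme of [1]). keys: (k, n) binary array of stored
    input vectors; values: (k, m) binary array of the paired outputs.
    The query activates the single stored key at minimal Hamming
    distance; np.argmin breaks ties by lowest index, so exactly one
    output vector is ever produced."""
    distances = np.count_nonzero(keys != query, axis=1)
    winner = int(np.argmin(distances))
    return values[winner]

# Example: two stored pairs; a noisy query still yields one output.
keys = np.array([[1, 0, 1, 0], [1, 1, 0, 0]])
values = np.array([[0, 1], [1, 0]])
print(recall_unique(keys, values, np.array([1, 0, 1, 1])))  # -> [0 1]
```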

