A two-level hierarchical framework of visual working memory

2017 ◽  
Author(s):  
Tal Yatziv ◽  
Yoav Kessler

Over the last couple of decades, a vast amount of research has been dedicated to understanding the nature and the architecture of visual short-term memory (VSTM), the mechanism by which currently relevant visual information is maintained. According to discrete-capacity models, VSTM is constrained by a limited number of discrete representations held simultaneously. In contrast, shared-resource models regard VSTM as limited in resources, which can be distributed flexibly between varying numbers of representations, and a new interference model posits that capacity is limited by interference among items. In this paper, we begin by reviewing benchmark findings regarding the debate over VSTM limitations, focusing on whether VSTM storage is all-or-none, and on whether objects’ complexity affects capacity. Afterwards, we put forward a hybrid framework of VSTM architecture, arguing that this system is composed of a two-level hierarchy of memory stores, each containing a different set of representations: (1) Perceptual Memory (PM), a resource-like level containing analog automatically-formed representations of visual stimuli in varying degrees of activation, and (2) visual Working Memory (WM), in which a subset of 3-4 items from PM are bound to conceptual representations and to their locations, thus conveying discrete (digital/symbolic) information which appears quantized. While PM has a large capacity and is relatively non-selective, visual WM is restricted in the number of items that can be maintained simultaneously and its content is regulated by a gating mechanism.

2013 ◽  
Author(s):  
Robert H. Logie ◽  
Mario Parra ◽  
Stephen Rhodes ◽  
Elaine Niven ◽  
Richard Allen ◽  
...  

2002 ◽  
Vol 55 (3) ◽  
pp. 753-774 ◽  
Author(s):  
Jackie Andrade ◽  
Eva Kemps ◽  
Yves Werniers ◽  
Jon May ◽  
Arnaud Szmalec

Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.


2011 ◽  
Vol 49 (6) ◽  
pp. 1559-1568 ◽  
Author(s):  
Annelinde R.E. Vandenbroucke ◽  
Ilja G. Sligte ◽  
Victor A.F. Lamme

2001 ◽  
Vol 24 (1) ◽  
pp. 139-141 ◽  
Author(s):  
Antonino Raffone ◽  
Gezinus Wolters ◽  
Jacob M. Murre

We suggest a neurophysiological account of the short-term memory capacity limit based on a model of visual working memory (Raffone & Wolters, in press). Simulations have revealed a critical capacity limit of about four independent patterns. The model mechanisms may be applicable to working memory in general and they allow a reinterpretation of some of the issues discussed by Cowan.


2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Milos Antonijevic ◽  
Miodrag Zivkovic ◽  
Sladjana Arsic ◽  
Aleksandar Jevremovic

Visual short-term memory (VSTM) is defined as the ability to remember a small amount of visual information, such as colors and shapes, during a short period of time. VSTM is a part of short-term memory, which can hold information for up to 30 seconds. In this paper, we present the results of research in which we classified data gathered by electroencephalogram (EEG) during a VSTM experiment. The experiment was performed with 12 participants who were required to remember as many details as possible from two images, each displayed for 1 minute. The first assessment was done in an isolated environment, while the second assessment was done in front of the other participants, in order to increase the stress on the examinee. The classification of the EEG data was done using four algorithms: Naive Bayes, support vector machines (SVM), KNN, and random forest. The results obtained show that AI-based classification could be successfully used in the proposed way, since we were able to correctly classify the order of the images presented 90.12% of the time and the type of the displayed image 90.51% of the time.
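To make the classification step concrete, the following is a minimal sketch of k-nearest-neighbour voting, one of the four algorithms named above. The feature vectors and labels here are synthetic two-dimensional stand-ins; the paper's actual EEG-derived features, preprocessing, and parameter choices are not specified in the abstract.

```python
# Minimal k-nearest-neighbour classifier (stdlib only).
# Training points and labels below are synthetic stand-ins for
# EEG-derived feature vectors, used purely for illustration.
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every training point.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    # Majority vote over the k closest neighbours.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Synthetic two-class data: class "A" clusters near (0, 0), class "B" near (5, 5).
train = [(0.0, 0.0), (0.5, 0.2), (0.1, 0.4), (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
labels = ["A", "A", "A", "B", "B", "B"]

print(knn_predict(train, labels, (0.3, 0.1)))  # "A"
print(knn_predict(train, labels, (5.0, 4.9)))  # "B"
```

In practice each of the four algorithms would be trained on the same labeled EEG feature matrix and compared by held-out accuracy, which is how per-algorithm figures like those reported above are obtained.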


2020 ◽  
Vol 117 (51) ◽  
pp. 32329-32339
Author(s):  
Jing Liu ◽  
Hui Zhang ◽  
Tao Yu ◽  
Duanyu Ni ◽  
Liankun Ren ◽  
...  

Visual short-term memory (VSTM) enables humans to form a stable and coherent representation of the external world. However, the nature and temporal dynamics of the neural representations in VSTM that support this stability are barely understood. Here we combined human intracranial electroencephalography (iEEG) recordings with analyses using deep neural networks and semantic models to probe the representational format and temporal dynamics of information in VSTM. We found clear evidence that VSTM maintenance occurred in two distinct representational formats which originated from different encoding periods. The first format derived from an early encoding period (250 to 770 ms) corresponded to higher-order visual representations. The second format originated from a late encoding period (1,000 to 1,980 ms) and contained abstract semantic representations. These representational formats were overall stable during maintenance, with no consistent transformation across time. Nevertheless, maintenance of both representational formats showed substantial arrhythmic fluctuations, i.e., waxing and waning in irregular intervals. The increases of the maintained representational formats were specific to the phases of hippocampal low-frequency activity. Our results demonstrate that human VSTM simultaneously maintains representations at different levels of processing, from higher-order visual information to abstract semantic representations, which are stably maintained via coupling to hippocampal low-frequency activity.


1968 ◽  
Vol 27 (3_suppl) ◽  
pp. 1155-1158 ◽  
Author(s):  
Daniel N. Robinson

Ss were exposed to discontinuously presented signals in a compensatory tracking task. Signals were “on” for durations of 16.7, 50, 150, 300, or 500 msec. followed by “off” periods of the same durations. From measures of tracking accuracy under the various on-off combinations, the following conclusions emerge: (a) most of the utilizable visual information is present in the first 15 to 50 msec.; (b) the short-term storage capacity, i.e., the temporal range over which the system can “coast” without input, extends to at least 300 msec.; (c) measures taken under stimulating conditions of long duration and time-varying characteristics result in different assessments of visual short-term memory than those obtained under two-flash (transient response) conditions.


Author(s):  
Paul Zerr ◽  
Surya Gayet ◽  
Floris van den Esschert ◽  
Mitchel Kappen ◽  
Zoril Olah ◽  
...  

Accessing the contents of visual short-term memory (VSTM) is compromised by information bottlenecks and visual interference between memorization and recall. Retro-cues, displayed after the offset of a memory stimulus and prior to the onset of a probe stimulus, indicate the test item and improve performance in VSTM tasks. It has been proposed that retro-cues aid recall by transferring information from a high-capacity memory store into visual working memory (multiple-store hypothesis). Alternatively, retro-cues could aid recall by redistributing memory resources within the same (low-capacity) working memory store (single-store hypothesis). If retro-cues provide access to a memory store with a capacity exceeding the set size, then, given sufficient training in the use of the retro-cue, near-ceiling performance should be observed. To test this prediction, 10 observers each completed 12 hours of testing across 8 sessions on a retro-cue change-detection task (40,000+ trials total). The results provided clear support for the single-store hypothesis: retro-cue benefits (the difference between conditions with and without retro-cues) emerged after a few hundred trials and then remained constant throughout the testing sessions, consistently improving performance by two items rather than reaching ceiling performance. Surprisingly, we also observed a general increase in performance throughout the experiment in conditions with and without retro-cues, calling into question the generalizability of change-detection tasks in assessing working memory capacity as a stable trait of an observer (data and materials are available at osf.io/9xr82 and github.com/paulzerr/retrocues). In summary, the present findings suggest that retro-cues increase capacity estimates by redistributing memory resources across memoranda within a low-capacity working memory store.

