Building Blocks of Visual Working Memory: Objects or Boolean Maps?

2013 ◽  
Vol 25 (5) ◽  
pp. 743-753 ◽  
Author(s):  
Mowei Shen ◽  
Wenjun Yu ◽  
Xiaotian Xu ◽  
Zaifeng Gao

The nature of the building blocks of information in visual working memory (VWM) is a fundamental issue that remains unresolved. Most researchers take objects as the building blocks, although this perspective has drawn criticism. The objects could be physically separate ones (strict object hypothesis) or hierarchical objects created from separate individuals (broad object hypothesis). Meanwhile, a recently proposed Boolean map theory of visual attention suggests that Boolean maps may be the building blocks of VWM (Boolean map hypothesis); this perspective can explain many critical findings in VWM research. However, no previous study has tested these hypotheses against one another. We explored this issue by focusing on a critical point on which they make distinct predictions. We asked participants to remember two distinct objects (2-object), three distinct objects (3-object), or three objects containing repeated information (mixed-3-object; e.g., one red bar and two green bars, where the green bars could be represented as one hierarchical object), and we used contralateral delay activity (CDA) to tap the maintenance phase of VWM. The three hypotheses predict that the mixed-3-object condition should yield two Boolean maps (Boolean map hypothesis), three objects (strict object hypothesis), or three objects on most trials (broad object hypothesis; on trials where a hierarchical object is formed, two objects are retained). Simple orientations (Experiment 1) and colors (Experiments 2 and 3) served as stimuli. Although the CDA in the mixed-3-object condition was slightly lower than in the 3-object condition, the difference was not significant; both conditions showed significantly higher CDAs than the 2-object condition. These findings support the broad object hypothesis. We further suggest that Boolean maps might be the unit of retrieval/comparison in VWM.

Author(s):  
Christian Merkel ◽  
Mandy Viktoria Bartsch ◽  
Mircea A Schoenfeld ◽  
Anne-Katrin Vellage ◽  
Notger G Müller ◽  
...  

Visual working memory (VWM) is an active representation that enables the manipulation of item information even in the absence of visual input. A common way to investigate VWM is to analyze performance at later recall. This approach, however, leaves uncertainty about whether variation in recall performance is attributable to item encoding and maintenance or to the testing of memorized information. Here, we record the contralateral delay activity (CDA), an established electrophysiological measure of item storage and maintenance, in human subjects performing a delayed orientation precision estimation task. This allows us to link fluctuations in recall precision directly to the process of item encoding and maintenance. We show that for two sequentially encoded orientation items, the CDA amplitude reflects the precision of orientation recall for both items, with higher precision associated with a larger amplitude. Furthermore, the CDA amplitudes for the two items vary independently of each other, suggesting that the precision of memory representations fluctuates independently.
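The CDA referred to in the abstracts above is, operationally, the mean difference in delay-period ERP amplitude between electrodes contralateral versus ipsilateral to the memorized hemifield. A minimal sketch of that computation, using simulated (not real) trial data in which contralateral activity is made more negative during maintenance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical delay-period ERP traces (trials x timepoints, microvolts)
# for electrodes contralateral and ipsilateral to the memorized hemifield.
n_trials, n_samples = 200, 500
ipsi = rng.normal(0.0, 1.0, size=(n_trials, n_samples))
# Simulate a sustained contralateral negativity of about -1 microvolt.
contra = ipsi + rng.normal(-1.0, 0.5, size=(n_trials, n_samples))

def cda_amplitude(contra, ipsi):
    """CDA: contralateral-minus-ipsilateral amplitude, averaged over
    the delay period and across trials."""
    return float((contra - ipsi).mean())

amp = cda_amplitude(contra, ipsi)
```

With these simulated parameters, `amp` comes out close to the injected -1 microvolt negativity; in a real experiment the difference wave would be computed from baseline-corrected EEG averaged within a delay-period time window.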


2020 ◽  
Author(s):  
Timothy F. Brady ◽  
Viola S. Störmer ◽  
Anna Shafer-Skelton ◽  
Jamal Rodgers Williams ◽  
Angus F. Chapman ◽  
...  

Both visual attention and visual working memory tend to be studied either with very simple stimuli and low-level paradigms, designed to allow us to understand the underlying representations and processes in detail, or with fully realistic stimuli that make such precise understanding difficult but are more representative of the real world. In this chapter we argue for an intermediate approach, in which visual attention and visual working memory are studied by scaling up from the simplest settings to more complex settings that capture some of the complexity of the real world while remaining in the realm of well-controlled stimuli and well-understood tasks. We believe this approach, which we have been taking in our labs, will yield a more generalizable body of knowledge about visual attention and visual working memory while maintaining the rigor and control typical of vision science and psychophysics.


2021 ◽  
pp. 1-55
Author(s):  
Jeffrey Frederic Queisser ◽  
Minju Jung ◽  
Takazumi Matsumoto ◽  
Jun Tani

Abstract Generalization by learning is an essential cognitive competency for humans. For example, we can manipulate even unfamiliar objects and can generate mental images before enacting a plan. How is this possible? Our study investigated this problem by revisiting our previous work (Jung, Matsumoto, & Tani, 2019), which examined vision-based, goal-directed planning by robots performing a block-stacking task. Extending that study, our work introduces a large network comprising dynamically interacting submodules, including visual working memory modules (VWMs), a visual attention module, and an executive network. The executive network predicts motor signals, visual images, and various controls for attention, as well as masking of visual information. The most significant difference from the previous study is that our current model contains an additional VWM. The entire network is trained using predictive coding, and an optimal visuomotor plan to achieve a given goal state is inferred using active inference. Results indicate that our current model performs significantly better than that of Jung et al. (2019), especially when manipulating blocks with unlearned colors and textures. Simulation results revealed that the observed generalization was achieved because content-agnostic information processing developed through synergistic interaction between the second VWM and the other modules during learning, whereby memorizing image contents and transforming them are dissociated. This letter verifies this claim through both qualitative and quantitative analysis of the simulation results.


2019 ◽  
Vol 85 (10) ◽  
pp. S285-S286
Author(s):  
Brian Coffman ◽  
Tim Murphy ◽  
Gretchen Haas ◽  
Carl Olson ◽  
Raymond Y. Cho ◽  
...  
