Impulse responses reveal unimodal and bimodal access to visual and auditory working memory

2019
Author(s): M. J. Wolff, G. Kandemir, M. G. Stokes, E. G. Akyürek

Abstract: It is unclear to what extent sensory processing areas are involved in the maintenance of sensory information in working memory (WM). Previous studies have thus far relied on finding neural activity in the corresponding sensory cortices, neglecting potential activity-silent mechanisms such as connectivity-dependent encoding. It has recently been found that visual stimulation during visual WM maintenance reveals WM-dependent changes through a bottom-up neural response. Here, we test whether this impulse response is uniquely visual and sensory-specific. Human participants (both sexes) completed visual and auditory WM tasks while electroencephalography was recorded. During the maintenance period, the WM network was perturbed serially with fixed and task-neutral auditory and visual stimuli. We show that a neutral auditory impulse stimulus presented during the maintenance of a pure tone resulted in a WM-dependent neural response, providing evidence for the auditory counterpart to the visual WM findings reported previously. Interestingly, visual stimulation also resulted in an auditory WM-dependent impulse response, implicating the visual cortex in the maintenance of auditory information, either directly or indirectly as a pathway to the neural auditory WM representations elsewhere. In contrast, during visual WM maintenance only the impulse response to visual stimulation was content-specific, suggesting that visual information is maintained in a sensory-specific neural network, separated from auditory processing areas.

Significance Statement: Working memory is a crucial component of intelligent, adaptive behaviour. Our understanding of the neural mechanisms that support it has recently shifted: rather than being dependent on an unbroken chain of neural activity, working memory may rely on transient changes in neuronal connectivity, which can be maintained efficiently in activity-silent brain states. Previous work using a visual impulse stimulus to perturb the memory network has implicated such silent states in the retention of line orientations in visual working memory. Here, we show that auditory working memory similarly retains auditory information. We also observed a sensory-specific impulse response in visual working memory, while auditory memory responded bimodally to both visual and auditory impulses, possibly reflecting visual dominance of working memory.
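A minimal sketch of the kind of cross-validated, multivariate decoding that is typically used to test whether the impulse-evoked EEG response is specific to the memorized content. The data shapes, channel count, stimulus classes, and the choice of an LDA classifier are illustrative assumptions, not the authors' exact analysis pipeline.

```python
# Hypothetical sketch: decode working-memory content from the impulse-evoked EEG
# response. Data shapes, labels, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 64, 50   # assumed: 100 ms impulse window at 500 Hz
epochs = rng.standard_normal((n_trials, n_channels, n_times))  # impulse-locked EEG (placeholder data)
memory_item = rng.integers(0, 4, n_trials)    # assumed: 4 memorized stimulus classes

# Pool channels x time into one feature vector per trial, z-score the features,
# and estimate decoding accuracy with stratified cross-validation.
X = epochs.reshape(n_trials, -1)
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, memory_item,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"impulse-response decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

Above-chance decoding of the memorized item from the impulse-evoked response is the signature such analyses look for; with the random placeholder data here, accuracy should hover around chance.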

2019
Author(s): Elio Balestrieri, Luca Ronconi, David Melcher

Abstract: Attention and Visual Working Memory (VWM) are among the most theoretically detailed and empirically tested constructs in human cognition. Nevertheless, the nature of the interrelation between selective attention and VWM still presents a fundamental controversy: do they rely on the same cognitive resources or not? The present study aims to disentangle this issue by capitalizing on recent evidence showing that attention is a rhythmic phenomenon, oscillating over short time windows. Using a dual-task approach, we combined a classic VWM task with a detection task in which we densely sampled detection performance during the interval between the memory array and the test array. Our results show that an increase in VWM load was related to worse detection of near-threshold visual stimuli and, importantly, to the presence of an oscillatory pattern in detection performance at ∼5 Hz. Furthermore, our findings suggest that the frequency of this sampling rhythm changes according to the strategic allocation of attentional resources to either the VWM or the detection task. This pattern of results is consistent with a central attentional sampling rhythm that allocates shared attentional resources both to the flow of external visual stimulation and to the internal maintenance of visual information.
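A sketch of how a rhythmic modulation in time-resolved detection performance can be quantified: accuracy is computed per probe time, detrended, and transformed into a frequency spectrum. The sampling grid, detrending choice, and the simulated ~5 Hz modulation are assumptions about the general approach rather than the authors' analysis.

```python
# Hypothetical sketch: test for an oscillation in time-resolved detection accuracy.
# Sampling grid, detrending, and the simulated ~5 Hz rhythm are illustrative assumptions.
import numpy as np

dt = 0.02                                    # assumed 20 ms steps between probe onsets
t = np.arange(0.1, 1.1, dt)                  # probe times after the memory array (s)
rng = np.random.default_rng(1)
accuracy = 0.6 + 0.05 * np.sin(2 * np.pi * 5 * t) + 0.02 * rng.standard_normal(t.size)

detrended = accuracy - np.polyval(np.polyfit(t, accuracy, 2), t)   # remove slow trend
spectrum = np.abs(np.fft.rfft(detrended * np.hanning(t.size))) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)

peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"dominant behavioural rhythm near {peak:.1f} Hz")
```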


Author(s): Antonio Prieto, Vanesa Peinado, Julia Mayas

Abstract: Visual working memory has been defined as a system of limited capacity that enables the maintenance and manipulation of visual information. However, some perceptual features, such as Gestalt grouping, could improve visual working memory effectiveness. In two experiments, we explored how the presence of elements grouped by color similarity affects change detection performance for both grouped and non-grouped items. We combined a change detection task with a retrocue paradigm in which a six-item array had to be remembered. An always-valid, variable-delay retrocue appeared in some trials during the retention interval, either after 100 ms (iconic-trace period) or 1400 ms (working memory period), signaling the location of the probe. The results indicated that similarity grouping biased which information entered visual working memory, improving change detection accuracy only for previously grouped probes, but hindering change detection for non-grouped probes in certain conditions (Exp. 1). However, this bottom-up, automatic encoding bias was overridden when participants were explicitly instructed to ignore grouped items as irrelevant to the task (Exp. 2).
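Change-detection performance in designs like this is commonly summarized either as raw accuracy or with Cowan's K, computed as K = set size × (hit rate − false-alarm rate). The sketch below illustrates these generic measures; it is not a claim about which summary the authors report.

```python
# Generic change-detection summary measures (accuracy and Cowan's K);
# not necessarily the measures reported in the study above.
def change_detection_summary(hits, misses, false_alarms, correct_rejections, set_size=6):
    """Return overall accuracy and Cowan's K for one condition of a change-detection task."""
    n_change = hits + misses
    n_same = false_alarms + correct_rejections
    hit_rate = hits / n_change
    fa_rate = false_alarms / n_same
    accuracy = (hits + correct_rejections) / (n_change + n_same)
    k = set_size * (hit_rate - fa_rate)      # estimated number of items retained
    return accuracy, k

acc, k = change_detection_summary(hits=70, misses=30, false_alarms=20, correct_rejections=80)
print(f"accuracy = {acc:.2f}, K = {k:.1f} items")
```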


2021
Vol. 33 (5), pp. 902-918
Author(s): Isabel E. Asp, Viola S. Störmer, Timothy F. Brady

Abstract: Almost all models of visual working memory—the cognitive system that holds visual information in an active state—assume it has a fixed capacity: some models propose a limit of three to four objects, whereas others propose a fixed pool of resources for each basic visual feature. Recent findings, however, suggest that memory performance is improved for real-world objects. What supports these increases in capacity? Here, we test whether the meaningfulness of a stimulus alone influences working memory capacity while controlling for visual complexity and directly assessing the active component of working memory using EEG. Participants remembered ambiguous stimuli that could be perceived either as a face or as a meaningless shape. Participants showed higher performance and increased neural delay activity when the memory display consisted of more meaningful stimuli. Critically, by asking participants whether they perceived the stimuli as faces or not, we also show that these increases in visual working memory capacity and recruitment of additional neural resources are due to the subjective perception of the stimulus and thus cannot be driven by its physical properties. Broadly, this suggests that the capacity for active storage in visual working memory is not fixed; rather, more meaningful stimuli recruit additional working memory resources, allowing them to be better remembered.


2020
pp. 311-332
Author(s): Nicole Hakim, Edward Awh, Edward K. Vogel

Visual working memory allows us to maintain information in mind for use in ongoing cognition. Research on visual working memory often characterizes it within the context of its interaction with long-term memory (LTM). These embedded-processes models describe memory representations as existing in three potential states: inactivated LTM, comprising all representations stored in LTM; activated LTM, latent representations that can quickly be brought into an active state through contextual priming or recency; and the focus of attention, an active but sharply limited state in which only a small number of items can be represented simultaneously. This chapter extends the embedded-processes framework of working memory. It proposes that working memory should be defined operationally on the basis of neural activity. Defining working memory in this way maintains the important theoretical distinction between working memory and LTM while still acknowledging that the two operate together. It is additionally proposed that active working memory should be further subdivided into at least two subcomponent processes, indexing item-based storage and currently prioritized spatial locations. This fractionation of working memory is based on recent research showing that the maintenance of information relies on item-based representations as well as on the prioritization of spatial locations. It is hoped that this updated definition of working memory within the embedded-processes model provides further traction for understanding how we maintain information in mind.


2020
Vol. 30 (9), pp. 4759-4770
Author(s): Maro G. Machizawa, Jon Driver, Takeo Watanabe

Abstract: Visual working memory (VWM) refers to our ability to selectively maintain visual information in a mental representation. While the cognitive limits of VWM greatly influence a variety of mental operations, it remains controversial whether the quantity or the quality of representations in mind constrains VWM. Here, we examined behavior-to-brain-anatomy relations as well as brain-activity-to-brain-anatomy associations with a “neural” marker specific to the retention interval of VWM. Our results consistently indicated that individuals who maintained a larger number of items in VWM tended to have larger gray matter (GM) volume in the left lateral occipital region. In contrast, individuals with a superior ability to retain items with high precision tended to have larger GM volume in the right parietal lobe. These results indicate that individual differences in the quantity and quality of VWM may be associated with regional GM volumes in a dissociable manner, suggesting that willful integration of information in VWM may recruit separable cortical subsystems.
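A hypothetical sketch of the kind of across-participant brain-behaviour association described here, relating a capacity (quantity) estimate and a precision (quality) estimate to regional grey-matter volumes. The sample size, the simulated measures, and the use of plain Pearson correlations are assumptions, not the authors' analysis.

```python
# Hypothetical sketch: relate behavioural VWM measures to regional grey-matter volume
# across participants. Sample size, measures, and plain correlations are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 60                                                       # assumed number of participants

k_capacity = rng.normal(3.0, 0.8, n)                         # estimated number of items retained
precision = rng.normal(1.0, 0.3, n)                          # inverse of recall error (a.u.)
gm_lat_occipital = 0.4 * k_capacity + rng.normal(0, 1, n)    # simulated regional GM volume
gm_parietal = 0.4 * precision + rng.normal(0, 1, n)

for label, behav, gm in [("quantity ~ left lateral occipital", k_capacity, gm_lat_occipital),
                         ("precision ~ right parietal", precision, gm_parietal)]:
    r, p = stats.pearsonr(behav, gm)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```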


2015
Vol. 231 (1), pp. 33-41
Author(s): Christian Knöchel, Viola Oertel-Knöchel, Robert Bittner, Michael Stäblein, Vera Heselhaus, ...

2020
Author(s): Timothy F. Brady, Viola S. Störmer

Visual working memory is a capacity-limited cognitive system used to actively store and manipulate visual information. Visual working memory capacity is not fixed, but varies by stimulus type: stimuli that are more meaningful are better remembered. In the current work, we investigate which conditions lead to the strongest benefits for meaningful stimuli. We propose that in some situations participants may try to encode the entire display holistically (i.e., in a quick ‘snapshot’), which encourages them to treat objects simply as meaningless colored ‘blobs’ rather than processing them individually and in a high-level way, and which could therefore reduce the benefit for meaningful stimuli. In a series of experiments, we directly test whether real-world objects, colors, perceptually matched less-meaningful objects, and fully scrambled objects benefit from deeper processing. We systematically vary the presentation format at encoding: stimuli appear either simultaneously, encouraging a parallel ‘take-a-quick-snapshot’ strategy, or sequentially, promoting a serial, one-item-at-a-time strategy. We find large advantages for meaningful objects in all conditions, but real-world objects, and to a lesser degree lightly scrambled yet still meaningful versions of them, benefit from sequential encoding and thus from deeper processing focused on individual items, whereas colors do not. Our results suggest that single-feature objects may be an outlier in their affordance of parallel, quick processing, and that in more realistic memory situations visual working memory likely relies on representations resulting from in-depth processing of objects (e.g., in higher-level visual areas) rather than solely on their low-level features.


2021
Author(s): Catherine V. Barnes, Lara Roesler, Michael Schaum, Carmen Schiweck, Benjamin Peters, ...

Objective: People with schizophrenia (PSZ) are impaired in the attentional prioritization of non-salient but relevant stimuli over salient but irrelevant distractors during visual working memory (VWM) encoding. Conversely, the guidance of top-down attention by external predictive cues is intact. Yet it is unknown whether this preserved ability can help PSZ overcome impaired attentional prioritization in the presence of salient distractors. Methods: We employed a visuospatial change-detection task using four Gabor patches with differing orientations in 69 PSZ and 74 healthy controls (HCS). Two patches flickered to induce saliency, and either a predictive or a non-predictive cue was displayed, resulting in four conditions. Results: Across all conditions, PSZ stored significantly less information in VWM than HCS (all p < 0.001). With a non-predictive cue, PSZ stored significantly more salient than non-salient information (t(140) = 5.66, p < 0.001, d = 0.5). With a predictive cue, PSZ stored significantly more non-salient information (t(140) = 5.70, p < 0.001, d = 0.5). Conclusion: Our findings support a bottom-up bias in schizophrenia, with significantly better performance for visually salient information in the absence of a predictive cue. These results indicate that bottom-up attentional prioritization is disrupted in schizophrenia, but the top-down utilization of cues is intact. We conclude that additional top-down information significantly improves performance in PSZ when non-salient visual information must be encoded into working memory.
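A minimal sketch of the kind of within-group contrast reported above (storage of salient vs. non-salient information under one cue condition), using a paired t-test and a paired-samples effect size. The simulated data, group size, and effect-size definition are assumptions, not the study's data or exact statistics.

```python
# Hypothetical sketch: within-group comparison of VWM storage for salient vs.
# non-salient items under one cue condition. Data and effect-size formula are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 69                                               # e.g., number of PSZ participants

k_salient = rng.normal(1.8, 0.6, n)                  # items stored for salient stimuli
k_nonsalient = k_salient - rng.normal(0.3, 0.5, n)   # items stored for non-salient stimuli

diff = k_salient - k_nonsalient
t, p = stats.ttest_rel(k_salient, k_nonsalient)      # paired t-test
d = diff.mean() / diff.std(ddof=1)                   # paired-samples Cohen's d
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```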

