Stimulus-driven updating of long-term context memories in visual search

Author(s): Markus Conci, Martina Zellin

Visual search for a target is faster when the spatial layout of nontarget items is repeatedly encountered, illustrating that learned contextual invariances can improve attentional selection (contextual cueing). This type of contextual learning is usually relatively efficient, but relocating the target to an unexpected location (within otherwise unchanged layouts) typically abolishes contextual cueing. Here, we explored whether bottom-up attentional guidance can mediate efficient contextual adaptation after such a change. Two experiments presented an initial learning phase, followed by a relocation phase that introduced target location changes. The location change was accompanied by transient attention-guiding signals that either up-modulated the changed target location (Experiment 1) or provided an inhibitory tag to down-modulate the initial target location (Experiment 2). Both experiments showed reliable contextual cueing before and after the target location change. By contrast, a control experiment (Experiment 3) that presented no attention-guiding signals together with the changed target showed no reliable cueing in the relocation phase, replicating previous findings. This pattern of results suggests that attentional guidance (by transient stimulus-driven facilitatory and inhibitory signals) enhances the flexibility of long-term contextual learning.
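The contextual-cueing paradigm referenced throughout these abstracts can be made concrete with a minimal sketch. The following Python snippet (all names hypothetical, a simplification rather than any study's actual stimulus code) shows the core manipulation: a fixed set of repeated layouts in which the distractor configuration consistently predicts the target location, intermixed with freshly generated layouts in every block.

```python
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # hypothetical 8x6 grid of display cells

def make_layout(n_distractors=11, rng=random):
    """Sample one target location plus non-overlapping distractor locations."""
    cells = rng.sample(GRID, n_distractors + 1)
    return {"target": cells[0], "distractors": frozenset(cells[1:])}

# Repeated ("old") layouts: the same distractor configuration is paired with
# the same target location in every block, so the context predicts the target.
repeated = [make_layout() for _ in range(12)]

def block_trials(rng=random):
    """One block: each repeated layout once, plus an equal number of novel layouts."""
    trials = [("repeated", layout) for layout in repeated]
    trials += [("novel", make_layout()) for _ in range(len(repeated))]
    rng.shuffle(trials)
    return trials
```

Contextual cueing is then the reaction-time advantage for "repeated" over "novel" trials that develops across blocks.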

2019, Vol. 82(4), pp. 1682-1694
Author(s): Siyi Chen, Zhuanghua Shi, Xuelian Zang, Xiuna Zhu, Leonardo Assumpção, ...

It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two delivered to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). Yet crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common external representational format.
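The somatotopic-versus-external distinction in Experiment 3 lends itself to a small illustration. This is a toy sketch (Python, hypothetical labels, not the authors' analysis): a tactile stimulus keeps its somatotopic coordinate (which finger was touched), but turning the hands over mirrors each hand's fingers in external space, so a context memory indexed by external position no longer matches the same finger pattern.

```python
# Hypothetical left-to-right external ordering of the 8 fingers, palms down.
FINGERS = ["L4", "L3", "L2", "L1", "R1", "R2", "R3", "R4"]

def external_position(finger: str, hands_flipped: bool) -> int:
    """Map a somatotopic label (which finger) to an external left-right slot.

    Flipping the hands leaves the somatotopic code unchanged but mirrors
    each hand's fingers in external space.
    """
    i = FINGERS.index(finger)
    if not hands_flipped:
        return i
    # Mirror within each hand: slots 0-3 reverse, slots 4-7 reverse.
    return (3 - i) if i < 4 else (4 + (7 - i))

assert external_position("L4", hands_flipped=False) == 0
assert external_position("L4", hands_flipped=True) == 3  # same finger, new place
```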


Author(s): Angela A. Manginelli, Franziska Geringswald, Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


NeuroImage, 2016, Vol. 124, pp. 887-897

Author(s): Stefan Pollmann, Jana Eštočinová, Susanne Sommer, Leonardo Chelazzi, Wolf Zinke

2001, Vol. 54(4), pp. 1105-1124

Author(s): Yuhong Jiang, Marvin M. Chun

The effect of selective attention on implicit learning was tested in four experiments using the “contextual cueing” paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.


2020, Vol. 31(12), pp. 1531-1543

Author(s): Artyom Zinchenko, Markus Conci, Thomas Töllner, Hermann J. Müller, Thomas Geyer

Visual search is facilitated when the target is repeatedly encountered at a fixed position within an invariant (vs. randomly variable) distractor layout—that is, when the layout is learned and guides attention to the target, a phenomenon known as contextual cuing. Subsequently changing the target location within a learned layout abolishes contextual cuing, which is difficult to relearn. Here, we used lateralized event-related electroencephalogram (EEG) potentials to explore memory-based attentional guidance (N = 16). The results revealed reliable contextual cuing during initial learning and an associated EEG-amplitude increase for repeated layouts in attention-related components, starting with an early posterior negativity (N1pc, 80–180 ms). When the target was relocated to the opposite hemifield following learning, contextual cuing was effectively abolished, and the N1pc was reversed in polarity (indicative of persistent misguidance of attention to the original target location). Thus, once learned, repeated layouts trigger attentional-priority signals from memory that proactively interfere with contextual relearning after target relocation.
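For readers unfamiliar with lateralized components such as the N1pc and N2pc, here is a rough sketch of the standard contralateral-minus-ipsilateral computation (Python/NumPy; a generic convention, not the authors' pipeline):

```python
import numpy as np

def lateralized_erp(erps, target_side, left_ch, right_ch):
    """Contralateral-minus-ipsilateral difference wave (e.g., N1pc/N2pc).

    erps:             array (n_trials, n_channels, n_times) of epoched EEG
    target_side:      sequence (n_trials,) of 'left'/'right' target hemifield
    left_ch/right_ch: indices of a lateral electrode pair (e.g., PO7/PO8)
    """
    diffs = []
    for trial, side in zip(erps, target_side):
        contra = trial[right_ch] if side == "left" else trial[left_ch]
        ipsi = trial[left_ch] if side == "left" else trial[right_ch]
        diffs.append(contra - ipsi)
    return np.mean(diffs, axis=0)  # average difference wave over trials

# The N1pc would then be quantified as the mean of this difference wave in an
# 80-180 ms window; a polarity reversal after target relocation indicates
# attention being drawn toward the original (now empty) target side.
```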


2012, Vol. 24(12), pp. 2281-2291

Author(s): Eva Zita Patai, Sonia Doallo, Anna Christina Nobre

In everyday situations, we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location [Summerfield, J. J., Rao, A., Garside, N., & Nobre, A. C. Biasing perception by spatial long-term memory. The Journal of Neuroscience, 31, 14952–14960, 2011; Summerfield, J. J., Lepsien, J., Gitelman, D. R., Mesulam, M. M., & Nobre, A. C. Orienting attention based on long-term memory experience. Neuron, 49, 905–916, 2006]. This study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc ERP to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top–down sources of information.
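The sensitivity measure d′ reported here comes from signal detection theory: d′ = z(hit rate) − z(false-alarm rate). A minimal sketch of the standard computation (Python; the log-linear correction is one common convention, not necessarily the authors' exact choice):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1, where z diverges.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Example: 45/50 hits vs. 10/50 false alarms yields d' of about 2.06.
print(round(d_prime(45, 5, 10, 40), 2))
```

Higher d′ for targets in remembered contexts is what the behavioural measures above quantify.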


2019, Vol. 31(3), pp. 442-452

Author(s): Artyom Zinchenko, Markus Conci, Paul C. J. Taylor, Hermann J. Müller, Thomas Geyer

This study investigates the causal contribution of the left frontopolar cortex (FPC) to the processing of violated expectations from learned target–distractor spatial contingencies during visual search. The experiment consisted of two phases: learning and test. Participants searched for targets presented either among repeated or nonrepeated target–distractor configurations. Prior research showed that repeated encounters of identically arranged displays lead to memory about these arrays, which then can come to guide search (contextual cueing effect). The crucial manipulation was a change of the target location, in a nevertheless constant distractor layout, at the transition from learning to test. In addition to this change, we applied repetitive transcranial magnetic stimulation (rTMS) over the left lateral FPC, over a posterior control site, or no rTMS at all (baseline; between-group manipulation) to see how FPC rTMS influences the ability of observers to adapt context-based memories acquired in the training phase. The learning phase showed expedited search in repeated relative to nonrepeated displays, with this context-based facilitation being comparable across all experimental groups. For the test phase, the recovery of cueing was critically dependent on the stimulation site: Although there was evidence of context adaptation toward the end of the experiment in the occipital and no-rTMS conditions, observers with FPC rTMS showed no evidence of relearning at all after target location changes. This finding shows that FPC plays an important role in the regulation of prediction errors in statistical context learning, thus contributing to an update of the spatial target–distractor contingencies after target position changes in learned spatial arrays.


2016, Vol. 28(12), pp. 1947-1963

Author(s): Anna Grubert, Nancy B. Carlisle, Martin Eimer

The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.


2019

Author(s): Eelke Spaak, Floris P. de Lange

Observers rapidly and seemingly automatically learn to predict where to expect relevant items when those items are repeatedly presented in the same spatial context. This form of statistical learning in visual search has been studied extensively using a paradigm known as contextual cueing. The neural mechanisms underlying the learning and exploiting of such regularities remain unclear. We sought to elucidate these by examining behaviour and recording neural activity using magneto-encephalography (MEG) while observers were implicitly acquiring and exploiting statistical regularities. Computational modelling of behavioural data suggested that after repeated exposures to a spatial context, participants’ behaviour was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that the initial learning phase was associated with larger hippocampal theta band activity for repeated scenes, while the subsequent exploitation phase showed larger prefrontal theta band activity for these repeated scenes. Strikingly, the behavioural benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. This elucidates how theta activity in the hippocampus and prefrontal cortex underpins the implicit learning and exploitation of spatial statistical regularities to optimize visual search behaviour.
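The "abrupt switch" conclusion from the computational modelling can be illustrated with a toy changepoint fit. This sketch (Python/NumPy; a deliberate simplification, not the authors' model) contrasts pre- and post-switch RT levels for a repeated scene and picks the single switch trial that minimizes squared error:

```python
import numpy as np

def fit_step(rt):
    """Least-squares changepoint for a step function: one flat RT level
    before the switch trial, a second flat level from the switch onward."""
    best_k, best_sse = None, np.inf
    for k in range(1, len(rt)):
        pre, post = rt[:k], rt[k:]
        sse = ((pre - pre.mean()) ** 2).sum() + ((post - post.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k, best_sse

rng = np.random.default_rng(0)
# Toy RT series for one repeated scene: 10 slow exposures, then an
# abrupt benefit once the learnt regularity is exploited.
rt = np.concatenate([rng.normal(900, 40, 10), rng.normal(750, 40, 10)])
switch, _ = fit_step(rt)
print(f"estimated switch after exposure {switch}")  # expect a value near 10
```

A gradual-learning account would instead predict that a smooth (e.g., exponential) RT decline fits better than such a step.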

