Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location

NeuroImage ◽  
2016 ◽  
Vol 124 ◽  
pp. 887-897 ◽  
Author(s):  
Stefan Pollmann ◽  
Jana Eštočinová ◽  
Susanne Sommer ◽  
Leonardo Chelazzi ◽  
Wolf Zinke


2001 ◽  
Vol 54 (4) ◽  
pp. 1105-1124 ◽  
Author(s):  
Yuhong Jiang ◽  
Marvin M. Chun

The effect of selective attention on implicit learning was tested in four experiments using the “contextual cueing” paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.


2021 ◽  
Vol 12 ◽  
Author(s):  
Lei Zheng ◽  
Jan-Gabriel Dobroschke ◽  
Stefan Pollmann

We investigated whether contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, thereby disrupting either the allocentric or the egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.


2019 ◽  
Vol 31 (3) ◽  
pp. 442-452 ◽  
Author(s):  
Artyom Zinchenko ◽  
Markus Conci ◽  
Paul C. J. Taylor ◽  
Hermann J. Müller ◽  
Thomas Geyer

This study investigates the causal contribution of the left frontopolar cortex (FPC) to the processing of violated expectations from learned target–distractor spatial contingencies during visual search. The experiment consisted of two phases: learning and test. Participants searched for targets presented either among repeated or nonrepeated target–distractor configurations. Prior research showed that repeated encounters of identically arranged displays lead to memory about these arrays, which then can come to guide search (contextual cueing effect). The crucial manipulation was a change of the target location, in a nevertheless constant distractor layout, at the transition from learning to test. In addition to this change, we applied repetitive transcranial magnetic stimulation (rTMS) over the left lateral FPC, over a posterior control site, or no rTMS at all (baseline; between-group manipulation) to see how FPC rTMS influences the ability of observers to adapt context-based memories acquired in the training phase. The learning phase showed expedited search in repeated relative to nonrepeated displays, with this context-based facilitation being comparable across all experimental groups. For the test phase, the recovery of cueing was critically dependent on the stimulation site: Although there was evidence of context adaptation toward the end of the experiment in the occipital and no-rTMS conditions, observers with FPC rTMS showed no evidence of relearning at all after target location changes. This finding shows that FPC plays an important role in the regulation of prediction errors in statistical context learning, thus contributing to an update of the spatial target–distractor contingencies after target position changes in learned spatial arrays.


2018 ◽  
Author(s):  
Bei Zhang ◽  
Fredrik Allenmark ◽  
Heinrich R. Liesefeld ◽  
Zhuanghua Shi ◽  
Hermann J. Müller

Observers can learn the likely locations of salient distractors in visual search, reducing their potential to capture attention (Ferrante et al., 2018; Sauter et al., 2018a; Wang & Theeuwes, 2018a). While there is agreement that this involves positional suppression of the likely distractor location(s), it is contentious at which stage of search guidance the suppression operates: the supra-dimensional priority map or feature-contrast signals within the distractor dimension. On the latter account, advocated by Sauter et al., target processing should be unaffected by distractor suppression when the target is defined in a different (non-suppressed) dimension to the distractor. At odds with this, Wang and Theeuwes found strong suppression not only of the (color) distractor, but also of the (shape) target when it appeared at the likely distractor location. Adopting their paradigm, the present study ruled out that increased cross-trial inhibition of the single frequent (frequently inhibited), as compared to any of the rare (rarely inhibited), distractor locations is responsible for this target-location effect. However, a reduced likelihood of the target appearing at the frequent vs. a rare distractor location does contribute to this effect: removing this negative bias abolished the cost to target processing with increasing practice, indicative of a transition from priority-map-based to dimension-based suppression, and thus a flexible locus of distractor suppression.
Public Significance Statement: Distraction by a salient visual stimulus outside the 'focus' of the task at hand occurs frequently. The present study examined whether and how 'knowledge' of the likely location(s) where distractors occur helps the observer to mitigate distraction. The results confirmed that observers can learn to suppress distracting stimuli at likely locations. Further, they showed that the suppression may occur at different levels of the hierarchically organized visual system in which the priorities of the objects to be attended in the environment are determined.


2019 ◽  
Vol 82 (4) ◽  
pp. 1682-1694 ◽  
Author(s):  
Siyi Chen ◽  
Zhuanghua Shi ◽  
Xuelian Zang ◽  
Xiuna Zhu ◽  
Leonardo Assumpção ◽  
...  

It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogenous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). But crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common external representational format.


Author(s):  
Markus Conci ◽  
Martina Zellin

Visual search for a target is faster when the spatial layout of nontarget items is repeatedly encountered, illustrating that learned contextual invariances can improve attentional selection (contextual cueing). This type of contextual learning is usually relatively efficient, but relocating the target to an unexpected location (within otherwise unchanged layouts) typically abolishes contextual cueing. Here, we explored whether bottom-up attentional guidance can mediate the efficient contextual adaptation after the change. Two experiments presented an initial learning phase, followed by a subsequent relocation phase that introduced target location changes. This location change was accompanied by transient attention-guiding signals that either up-modulated the changed target location (Experiment 1), or which provided an inhibitory tag to down-modulate the initial target location (Experiment 2). The results from these two experiments showed reliable contextual cueing both before and after the target location change. By contrast, an additional control experiment (Experiment 3) that did not present any attention-guiding signals together with the changed target showed no reliable cueing in the relocation phase, thus replicating previous findings. This pattern of results suggests that attentional guidance (by transient stimulus-driven facilitatory and inhibitory signals) enhances the flexibility of long-term contextual learning.


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


Author(s):  
Tobias Rieger ◽  
Lydia Heilmann ◽  
Dietrich Manzey

Visual inspection of luggage using X-ray technology at airports is a time-sensitive task that is often supported by automated systems to increase performance and reduce workload. The present study evaluated how time pressure and automation support influence visual search behavior and performance in a simulated luggage screening task. Moreover, we also investigated how target expectancy (i.e., targets appearing in a target-often location or not) influenced performance and visual search behavior. We used a paradigm in which participants used the mouse to uncover a portion of the screen, which allowed us to track how much of the stimulus participants uncovered prior to their decision. Participants were randomly assigned to either a high (5-s time per trial) or a low (10-s time per trial) time-pressure condition. In half of the trials, participants were supported by an automated diagnostic aid (85% reliability) in deciding whether a threat item was present. Moreover, within each half, in target-present trials, targets appeared in a predictable location (i.e., 70% of targets appeared in the same quadrant of the image) to investigate effects of target expectancy. The results revealed better detection performance with low time pressure and faster response times with high time pressure. There was an overall negative effect of automation support because the automation was only moderately reliable. Participants also uncovered a smaller amount of the stimulus under high time pressure in target-absent trials. Expectancy of the target location improved accuracy and speed, and reduced the amount of the stimulus that had to be uncovered during search.
Significance Statement: Luggage screening is a safety-critical real-world visual search task which often has to be done under time pressure. The present research found that time pressure compromises performance and increases the risk of missing critical items even with automation support. Moreover, even highly reliable automated support may not improve performance if it does not exceed the manual capabilities of the human screener. Lastly, the present research also showed that heuristic search strategies (e.g., prioritizing areas where targets appear more often) seem to guide attention in luggage screening as well.

