Task-Irrelevant Context Learned Under Rapid Display Presentation: Selective Attention in Associative Blocking

2021 ◽  
Vol 12 ◽  
Author(s):  
Xuelian Zang ◽  
Leonardo Assumpção ◽  
Jiao Wu ◽  
Xiaowei Xie ◽  
Artyom Zinchenko

In the contextual cueing task, visual search is faster for targets embedded in invariant displays than for targets found in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, where salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that associative blocking hinders the allocation of attention to the task-irrelevant subset, but not the learning per se. The current work examined these two explanations. In two experiments, participants performed a visual search task under a rapid presentation condition (300 ms) in Experiment 1 or under a longer presentation condition (2,500 ms) in Experiment 2. In both experiments, the search items within both old and new displays were presented in two colors that defined the task-relevant and task-irrelevant items within each display. Participants were asked to search for the target in the relevant subset during the learning phase. In the transfer phase, the instructions were reversed and task-irrelevant items became task-relevant (and vice versa). In line with previous studies, searching the task-irrelevant subsets produced no cueing effect post-transfer in the longer presentation condition; however, a reliable cueing effect was generated by task-irrelevant subsets learned under rapid presentation. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning. Under a longer display presentation, by contrast, global attention is blocked, leading to the exclusive learning of invariant relevant items in the learning session.
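To make the two-subset paradigm concrete, the following minimal Python sketch generates "old" (repeated) and "new" displays whose items are split into two color subsets, with the relevant color reversed at transfer. The grid size, item count, colors, and number of repeated layouts are hypothetical illustrations, not the study's actual parameters.

```python
import random

# Hypothetical parameters: an 8x6 location grid, 12 items per display,
# two color subsets, and 8 repeated ('old') layouts.
GRID = [(x, y) for x in range(8) for y in range(6)]

def make_display(n_items=12, rng=random):
    """Sample item locations and split them into two color-defined subsets."""
    locs = rng.sample(GRID, n_items)
    half = n_items // 2
    return {"red": locs[:half], "green": locs[half:]}

# 'Old' displays are generated once and repeated across blocks;
# 'new' displays are drawn afresh on every trial.
OLD_DISPLAYS = [make_display() for _ in range(8)]

def trial(repeated, relevant, rng=random):
    """Pick a display and place the target in the currently relevant subset."""
    display = rng.choice(OLD_DISPLAYS) if repeated else make_display(rng=rng)
    target_loc = rng.choice(display[relevant])
    return display, target_loc

# Learning phase: participants search the red (relevant) subset.
# Transfer phase: instructions reverse, so the formerly irrelevant
# green subset becomes the relevant one.
learning_trial = trial(repeated=True, relevant="red")
transfer_trial = trial(repeated=True, relevant="green")
```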

2020 ◽  
Vol 11 ◽  
Author(s):  
Xiaowei Xie ◽  
Siyi Chen ◽  
Xuelian Zang

In contextual cueing, a previously encountered context tends to facilitate the detection of a target embedded in it, relative to when the target appears in a novel context. In this study, we investigated whether contextual cueing can develop early, when the search display is presented only briefly. In four experiments, participants searched for a target T in an array of distractor Ls. The results showed that even with a rather short presentation time of the search display, participants were able to learn the spatial context and responded faster overall, with the learning effect lasting for a long period. Specifically, the contextual cueing effect was observed either with or without a mask after a 300-ms presentation of the search display. Such context learning under rapid presentation did not operate when only the local context information was repeated, suggesting that a global context is required to guide spatial attention when the viewing time of the search display is limited. Overall, these findings indicate that contextual cueing might arise at an "early," target selection stage and that the global context is necessary for context learning under rapid presentation to function.
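The dependent measure throughout these studies is the contextual cueing effect, i.e., the mean response-time advantage for repeated ("old") over novel ("new") displays. A minimal sketch of that computation, using invented RT values:

```python
from statistics import mean

# Illustrative per-trial data: (display type, response time in ms).
rts = [("old", 820), ("new", 900), ("old", 790), ("new", 880)]

def cueing_effect(trials):
    """Contextual cueing = mean RT on new displays minus mean RT on old ones;
    a positive value indicates facilitation by the repeated context."""
    old = [rt for cond, rt in trials if cond == "old"]
    new = [rt for cond, rt in trials if cond == "new"]
    return mean(new) - mean(old)

print(f"cueing effect: {cueing_effect(rts):.0f} ms")  # 85 ms here
```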


PLoS ONE ◽  
2015 ◽  
Vol 10 (4) ◽  
pp. e0124190 ◽  
Author(s):  
Chia-huei Tseng ◽  
Li Jingling

2017 ◽  
Vol 40 ◽  
Author(s):  
Linda Henriksson ◽  
Riitta Hari

A framework where only the size of the functional visual field of fixations can vary is hardly able to explain natural visual-search behavior. In real-world search tasks, context guides eye movements, and task-irrelevant social stimuli may capture the gaze.


2019 ◽  
Vol 31 (3) ◽  
pp. 442-452 ◽  
Author(s):  
Artyom Zinchenko ◽  
Markus Conci ◽  
Paul C. J. Taylor ◽  
Hermann J. Müller ◽  
Thomas Geyer

This study investigates the causal contribution of the left frontopolar cortex (FPC) to the processing of violated expectations from learned target–distractor spatial contingencies during visual search. The experiment consisted of two phases: learning and test. Participants searched for targets presented either among repeated or nonrepeated target–distractor configurations. Prior research showed that repeated encounters with identically arranged displays lead to memory of these arrays, which can then come to guide search (the contextual cueing effect). The crucial manipulation was a change of the target location, within an otherwise constant distractor layout, at the transition from learning to test. In addition to this change, we applied repetitive transcranial magnetic stimulation (rTMS) over the left lateral FPC, over a posterior control site, or no rTMS at all (baseline; between-group manipulation) to see how FPC rTMS influences observers' ability to adapt the context-based memories acquired in the learning phase. The learning phase showed expedited search for repeated relative to nonrepeated displays, with this context-based facilitation being comparable across all experimental groups. For the test phase, the recovery of cueing was critically dependent on the stimulation site: whereas there was evidence of context adaptation toward the end of the experiment in the occipital and no-rTMS conditions, observers with FPC rTMS showed no evidence of relearning after the target location changes. This finding shows that the FPC plays an important role in the regulation of prediction errors in statistical context learning, thus contributing to an update of the spatial target–distractor contingencies after target position changes in learned spatial arrays.


Perception ◽  
10.1068/p5135 ◽  
2003 ◽  
Vol 32 (11) ◽  
pp. 1351-1358 ◽  
Author(s):  
Tomohiro Nabeta ◽  
Fuminori Ono ◽  
Jun-Ichiro Kawahara

Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (the contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. Participants performed 320 (Experiment 1) or 192 (Experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials in which half of the trials had layouts used in the previous visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect transferred from visual to haptic search: haptic search was facilitated more when the spatial layout was the same as in the previous visual search trials than when it differed. This suggests that a common spatial memory serves to allocate focused attention in both the visual and haptic modalities.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Siyi Chen ◽  
Zhuanghua Shi ◽  
Hermann J. Müller ◽  
Thomas Geyer

Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session, importantly even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
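As a rough illustration of how the two diffusion-model parameters map onto response times, here is a minimal random-walk simulation. The Euler scheme and all parameter values are illustrative assumptions, not the authors' fitted model:

```python
import math
import random

def ddm_trial(drift, boundary, noise=1.0, dt=0.001, t0=0.3, rng=random):
    """Simulate one diffusion trial: evidence starts at 0 and accumulates
    until it hits +boundary (correct) or -boundary (error).
    Returns (decision time + non-decision time t0, correctness)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t + t0, x > 0

def mean_rt(drift, boundary, n=2000):
    return sum(ddm_trial(drift, boundary)[0] for _ in range(n)) / n

# A higher drift rate (faster evidence accumulation) and a lower boundary
# (less evidence required) both shorten mean RT, mirroring the two effects
# of contextual cueing reported above.
print(mean_rt(drift=1.0, boundary=1.0))   # e.g., non-repeated context
print(mean_rt(drift=1.5, boundary=0.8))   # e.g., repeated context: faster
```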


2018 ◽  
Author(s):  
Anne Schmidt ◽  
Franziska Geringswald ◽  
Fariba Sharifian ◽  
Stefan Pollmann

We tested whether high-level athletes or action video game players have superior context-learning skills. Incidental context learning was tested in a spatial contextual cueing paradigm. We found comparable contextual cueing of visual search in repeated displays for high-level amateur handball players, dedicated action video game players, and normal controls. In contrast, both handball players and action video game players searched faster than controls, measured as search time per display item, independent of display repetition. Thus, our data do not indicate superior context-learning skills in athletes or action video game players. Rather, both groups showed more efficient visual search in abstract displays that were not related to sport-specific situations.
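One common way to quantify search efficiency of this kind is the slope of response time over display size (ms per item); whether the authors used this exact measure or a simple per-item division is not stated here, so the sketch below is an assumption, with invented numbers:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of mean RT over display size, in ms per item;
    shallower slopes indicate more efficient search."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# Illustrative mean RTs for displays of 8, 12, and 16 items.
print(search_slope([8, 12, 16], [900, 1020, 1140]))  # 30.0 ms/item
```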


2020 ◽  
Author(s):  
Floortje G. Bouwkamp ◽  
Floris P. de Lange ◽  
Eelke Spaak

The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to test whether observers can additionally exploit temporal predictive context, using an extended version of a contextual cueing paradigm. Although we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was true both for participants who were sensitive to spatial predictive context and for those who were not. We argue that spatial predictive context during visual search is more readily learned and subsequently exploited than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.
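The core ordering manipulation can be sketched as follows; the number of scenes and the block structure are hypothetical stand-ins for the actual design:

```python
import random

SCENES = list(range(8))  # hypothetical set of eight repeated search scenes

def block_order(structured, rng=random):
    """Return the presentation order for one block: a fixed, predictable
    sequence in the structured condition, a fresh shuffle otherwise."""
    order = SCENES[:]
    if not structured:
        rng.shuffle(order)
    return order

structured_block = block_order(structured=True)   # same order every block
random_block = block_order(structured=False)      # unpredictable order
```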


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.

