Object-based Implicit Learning in Visual Search: Perceptual Segmentation Constrains Contextual Cueing

2012, Vol. 12 (9), pp. 958-958
Author(s): M. Conci, H. J. Müller, A. von Mühlenen
2011, Vol. 19 (2), pp. 203-233
Author(s): Markus Conci, Adrian von Mühlenen

2001, Vol. 54 (4), pp. 1105-1124
Author(s): Yuhong Jiang, Marvin M. Chun

The effect of selective attention on implicit learning was tested in four experiments using the “contextual cueing” paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.
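For concreteness, the repeated-versus-novel display logic of the contextual cueing paradigm can be sketched as follows. This is a minimal illustration only; the function names, grid size, and item counts are assumptions for the sketch, not the actual stimulus parameters used by Jiang and Chun.

```python
import random

def make_display(n_items=8, grid=(8, 6), seed=None):
    """Sample non-overlapping grid cells for one colour group
    (hypothetical simplification of the search displays)."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(grid[0]) for y in range(grid[1])]
    return rng.sample(cells, n_items)

def make_trial(repeat_attended, attended_old, seed=None):
    """Build one search trial: the attended-colour configuration is either
    repeated (predictive context) or newly generated, while the ignored-colour
    configuration varies from trial to trial."""
    rng = random.Random(seed)
    attended = attended_old if repeat_attended else make_display(seed=rng.random())
    ignored = make_display(seed=rng.random())
    # The target location is consistently paired with the repeated context,
    # which is what allows the repeated configuration to cue the target.
    target = attended[0]
    return attended, ignored, target
```

On repeated trials the attended configuration (and hence the target location paired with it) recurs exactly, which is the invariance that produces the search facilitation described above.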


2007, Vol. 60 (10), pp. 1321-1328
Author(s): Valeria Rausei, Tal Makovski, Yuhong V. Jiang

How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examined the degree of attentional dependency in implicit learning of repeated visual search contexts. Observers searched for a target among distractors that were either highly similar or dissimilar to the target. We found that the size of contextual cueing was comparable for the two types of repeated distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that beyond a minimal amount, further increases in attentional dwell time do not contribute significantly to implicit learning of repeated search contexts.


2019
Author(s): Eelke Spaak, Floris P. de Lange

Observers rapidly and seemingly automatically learn to predict where to expect relevant items when those items are repeatedly presented in the same spatial context. This form of statistical learning in visual search has been studied extensively using a paradigm known as contextual cueing. The neural mechanisms underlying the learning and exploiting of such regularities remain unclear. We sought to elucidate these by examining behaviour and recording neural activity using magneto-encephalography (MEG) while observers were implicitly acquiring and exploiting statistical regularities. Computational modelling of behavioural data suggested that after repeated exposures to a spatial context, participants’ behaviour was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that the initial learning phase was associated with larger hippocampal theta band activity for repeated scenes, while the subsequent exploitation phase showed larger prefrontal theta band activity for these repeated scenes. Strikingly, the behavioural benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. This elucidates how theta activity in the hippocampus and prefrontal cortex underpins the implicit learning and exploitation of spatial statistical regularities to optimize visual search behaviour.
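The "abrupt switch" described above can be made concrete with a simple step-change model of response times: before the switch point RTs fluctuate around one mean, after it around another, and the switch point is chosen to best explain the data. This is a generic sketch of the idea; the authors' actual computational model may differ in its details.

```python
import numpy as np

def fit_switch_point(rts):
    """Fit a two-segment step-change model to a sequence of response times.
    Returns the switch index k (1 <= k < len(rts)) that minimizes the total
    squared error around the pre-switch and post-switch means."""
    rts = np.asarray(rts, dtype=float)
    best_k, best_sse = 1, np.inf
    for k in range(1, len(rts)):
        pre, post = rts[:k], rts[k:]
        sse = ((pre - pre.mean()) ** 2).sum() + ((post - post.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

An abrupt drop in RTs for repeated displays, as opposed to a gradual decline, would be captured by a sharp step at the fitted switch point.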


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


NeuroImage, 2016, Vol. 124, pp. 887-897
Author(s): Stefan Pollmann, Jana Eštočinová, Susanne Sommer, Leonardo Chelazzi, Wolf Zinke

2012, Vol. 12 (9), pp. 1156-1156
Author(s): A. Greenberg, M. Rosen, K. Zamora, E. Cutrone, M. Behrmann

2020, Vol. 28 (9), pp. 470-483
Author(s): Chao Wang, Shree Venkateshan, Bruce Milliken, Hong-jin Sun
