Transfer of Spatial Context from Visual to Haptic Search

Perception  
10.1068/p5135  
2003  
Vol 32 (11)  
pp. 1351-1358  
Author(s):  
Tomohiro Nabeta ◽  
Fuminori Ono ◽  
Jun-Ichiro Kawahara

Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (the contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. Participants performed 320 (experiment 1) or 192 (experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials, half of which used layouts from the preceding visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect transferred from visual to haptic search: haptic search was faster when the spatial layout was the same as in the previous visual search trials than when it differed. This suggests that a common spatial memory guides the allocation of focused attention in both the visual and haptic modalities.
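
The cueing effect in this paradigm is quantified as the reaction-time (RT) advantage for repeated ("old") layouts over novel ("new") ones. The following is a minimal Python sketch of that computation; the simulated RTs, the assumed 80 ms advantage, and all variable names are illustrative assumptions, not data or code from the study.

```python
import random

# Hypothetical illustration of the contextual-cueing analysis:
# search RTs for repeated ("old") vs. novel ("new") layouts.
# All numbers below are simulated, not data from the study.
random.seed(0)

def simulate_rt(condition):
    """Simulate one search RT in ms; old layouts are assumed ~80 ms faster."""
    base = 900 if condition == "new" else 820
    return random.gauss(base, 100)

# 320 trials, matching the trial count of experiment 1.
trials = [{"condition": c, "rt": simulate_rt(c)}
          for c in ["old", "new"] * 160]

def mean_rt(cond):
    rts = [t["rt"] for t in trials if t["condition"] == cond]
    return sum(rts) / len(rts)

# Contextual-cueing effect: RT(new) - RT(old); positive = facilitation.
cueing_effect = mean_rt("new") - mean_rt("old")
print(f"Contextual-cueing effect: {cueing_effect:.0f} ms")
```

In an actual analysis the condition means would be computed per participant (and typically per block epoch) and compared statistically, rather than simply differenced across all trials.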

2021  
Vol 12  
Author(s):  
Xuelian Zang ◽  
Leonardo Assumpção ◽  
Jiao Wu ◽  
Xiaowei Xie ◽  
Artyom Zinchenko

In the contextual cueing task, visual search is faster for targets embedded in invariant displays than for targets in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, where salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that associative blocking hinders the allocation of attention to task-irrelevant subsets, but not the learning per se. The current work examined these two explanations. Participants performed a visual search task under a rapid presentation condition (300 ms; Experiment 1) or a longer presentation condition (2,500 ms; Experiment 2). In both experiments, the search items within both old and new displays were presented in two colors that defined the task-relevant and task-irrelevant items within each display. Participants were asked to search for the target in the relevant subset during the learning phase. In the transfer phase, the instructions were reversed and task-irrelevant items became task-relevant (and vice versa). In line with previous studies, searching task-irrelevant subsets produced no cueing effect post-transfer in the longer presentation condition; however, task-irrelevant subsets learned under rapid presentation generated a reliable cueing effect. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning. Under longer display presentation, by contrast, global attention is blocked, leading to exclusive learning of the invariant relevant items in the learning session.
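
To make the two-subset design concrete, here is a hedged Python sketch of how such displays might be constructed and how the relevant/irrelevant roles reverse at transfer. The grid size, subset sizes, and color assignments are assumptions for illustration, not the study's actual parameters.

```python
import random

# Hypothetical sketch of the two-color display design described above.
# Grid dimensions, subset sizes, and colors are illustrative assumptions.
random.seed(1)
GRID = [(x, y) for x in range(8) for y in range(6)]

def make_display(n_per_subset=6):
    """Build one display with a relevant (e.g. red) and an irrelevant
    (e.g. green) subset of item positions."""
    cells = random.sample(GRID, 2 * n_per_subset)
    return {"relevant": cells[:n_per_subset],     # searched during learning
            "irrelevant": cells[n_per_subset:]}   # ignored during learning

display = make_display()

# Learning phase: the target is searched within the relevant subset; in an
# "old" display, the irrelevant subset repeats across blocks but is not searched.
# Transfer phase: the instructions reverse, so the formerly irrelevant
# subset becomes the one that is searched (and vice versa).
display["relevant"], display["irrelevant"] = (display["irrelevant"],
                                              display["relevant"])
```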


2017  
Vol 8  
Author(s):  
Guang Zhao ◽  
Qian Zhuang ◽  
Jie Ma ◽  
Shen Tu ◽  
Qiang Liu ◽  
...  

2020  
Vol 10 (1)  
Author(s):  
Thomas Geyer ◽  
Pardis Rostami ◽  
Lisa Sogerer ◽  
Bernhard Schlagbauer ◽  
Hermann J. Müller

Visual search is facilitated when observers encounter targets in repeated display arrangements. This ‘contextual-cueing’ (CC) effect is attributed to incidental learning of spatial distractor-target relations. Prior work has typically used only one recognition measure (administered after the search task) to establish whether CC is based on implicit or explicit memory of repeated displays, with the outcome depending on the diagnostic accuracy of the test. The present study compared two explicit-memory tests to tackle this issue: yes/no recognition of a given search display as repeated, versus generation of the quadrant in which the target (replaced by a distractor) had been located during the search task; the latter closely matches the processes involved in performing the search. While repeated displays elicited a CC effect in the search task, both tests revealed above-chance knowledge of repeated displays, though explicit-memory accuracy and its correlation with contextual facilitation in the search task were more pronounced for the generation task. These findings argue in favor of a one-system, explicit-memory account of CC. Further, they demonstrate the superiority of the generation task for revealing the explicitness of CC, likely because the search and memory tasks involve overlapping processes (in line with ‘transfer-appropriate processing’).
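
Note that ‘above-chance’ implies different baselines for the two tests: 50% for yes/no recognition, but 25% for quadrant generation (one of four quadrants). A small stdlib-Python sketch of an exact one-sided binomial test against these baselines follows; the hit and trial counts are invented for illustration and are not the study's data.

```python
from math import comb

# Hypothetical check of above-chance explicit memory.
# Counts are invented for illustration, not data from the study.

def binom_p_upper(k, n, p):
    """One-sided binomial p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Yes/no recognition: chance = 50% (repeated vs. non-repeated display).
print(binom_p_upper(k=15, n=24, p=0.50))  # e.g. 15/24 displays correct

# Quadrant generation: chance = 25% (one of four target quadrants).
print(binom_p_upper(k=11, n=24, p=0.25))  # e.g. 11/24 quadrants correct
```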


2020  
Author(s):  
Floortje G. Bouwkamp ◽  
Floris P. de Lange ◽  
Eelke Spaak

The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We examined whether observers can additionally exploit temporal predictive context, using an extended version of a contextual cueing paradigm. Though we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was true both for participants who were sensitive to spatial predictive context and for those who were not. We argue that spatial predictive context during visual search is more readily learned, and subsequently exploited, than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect contextual cueing to rely only minimally, if at all, on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial load, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


NeuroImage  
2016  
Vol 124  
pp. 887-897  
Author(s):  
Stefan Pollmann ◽  
Jana Eštočinová ◽  
Susanne Sommer ◽  
Leonardo Chelazzi ◽  
Wolf Zinke

2011  
Vol 19 (2)  
pp. 203-233  
Author(s):  
Markus Conci ◽  
Adrian von Mühlenen
