contextual cueing
Recently Published Documents


TOTAL DOCUMENTS: 222 (FIVE YEARS: 55)

H-INDEX: 24 (FIVE YEARS: 3)

2021 ◽  
Author(s):  
Hui Huang ◽  
Yangming Zhang ◽  
Sheng Li

Perceptual training on multiple tasks suffers from interference between the trained tasks. Here, we conducted four psychophysical experiments with separate groups of participants to investigate whether this interference can be prevented in short-term perceptual training. We trained participants to detect Gabor stimuli of two orientations on two consecutive days at the same retinal location and examined the interference of training effects between the two orientations. The results showed significant retroactive interference from the second orientation to the first (Experiments 1 and 2). Introducing a 6-hour interval between the pre-test and the training of the second orientation did not eliminate the interference effect, ruling out an account based on disrupted reconsolidation, in which the pre-test of the second orientation reactivates and destabilizes the representation of the first orientation (Experiment 3). Finally, the training of the two orientations was accompanied by fixation points in two colors, each serving as a contextual cue for one orientation. Retroactive interference was no longer evident after these passively perceived contextual cues were introduced (Experiment 4). Our findings suggest that the retroactive interference effect in short-term perceptual training of orientation detection tasks was likely the result of higher-level factors, such as contextual cues shared between the tasks. Training on multiple perceptual tasks could therefore be facilitated by associating each trained task with a distinct contextual cue.


2021 ◽  
Vol 21 (9) ◽  
pp. 1907
Author(s):  
Hanane Ramzaoui ◽  
Sarah Poulet ◽  
André Didierjean ◽  
Fabien Mathy

2021 ◽  
Vol 21 (9) ◽  
pp. 1975
Author(s):  
Lei Zheng ◽  
Jan-Gabriel Dobroschke ◽  
Stefan Pollmann

2021 ◽  
Vol 21 (9) ◽  
pp. 2702
Author(s):  
Yibiao Liang ◽  
Zsuzsa Kaldy ◽  
Erik Blaser

2021 ◽  
Vol 21 (9) ◽  
pp. 1831
Author(s):  
Sascha Meyen ◽  
Ulrike von Luxburg ◽  
Volker H. Franz

2021 ◽  
Vol 21 (10) ◽  
pp. 9
Author(s):  
Nils Bergmann ◽  
Anna Schubö

2021 ◽  
Vol 12 ◽  
Author(s):  
Lei Zheng ◽  
Jan-Gabriel Dobroschke ◽  
Stefan Pollmann

We investigated whether contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, disrupting either the allocentric prediction alone or both the egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as at least one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.


2021 ◽  
Vol 93 ◽  
pp. 103164
Author(s):  
Youcai Yang ◽  
Mariana V.C. Coutinho ◽  
Anthony J. Greene ◽  
Deborah E. Hannula

2021 ◽  
Author(s):  
Sudhanshu Srivastava ◽  
William Wang ◽  
Miguel P. Eckstein

Human behavioral experiments have led to influential conceptualizations of visual attention, such as a serial processor or a limited-resource spotlight. There is growing evidence that simpler organisms such as insects show behavioral signatures associated with human attention. Can such capabilities be learned without these conceptualizations of human attention? We show that a feedforward convolutional neural network (CNN) with a few million neurons, trained on noisy images to detect targets, learns to utilize predictive cues and context. We demonstrate that the CNN predicts human performance and gives rise to the three most prominent behavioral signatures of covert attention: Posner cueing, set-size effects in search, and contextual cueing. The CNN also approximates an ideal Bayesian observer that has full prior knowledge of the statistical properties of the noise, targets, cues, and context. The results help explain how even simple biological organisms can show human-like visual attention by implementing neurobiologically plausible, simple computations.
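The ideal Bayesian observer that the CNN approximates can be illustrated with a minimal sketch: matched-filter (template) detection of a known target in Gaussian white noise, where a spatial cue contributes a log-prior over locations. All names and parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trial(target, loc, size=16, noise_sd=0.5, rng=rng):
    """Embed a known target at `loc` in a Gaussian white-noise image."""
    img = rng.normal(0.0, noise_sd, (size, size))
    t = target.shape[0]
    img[loc[0]:loc[0] + t, loc[1]:loc[1] + t] += target
    return img

def matched_filter_map(img, target):
    """Cross-correlate the image with the target template (valid region only)."""
    t = target.shape[0]
    h = img.shape[0] - t + 1
    out = np.empty((h, h))
    for i in range(h):
        for j in range(h):
            out[i, j] = np.sum(img[i:i + t, j:j + t] * target)
    return out

def locate(img, target, log_prior=None):
    """MAP localization: template response plus an optional cue-derived log-prior."""
    score = matched_filter_map(img, target)
    if log_prior is not None:
        score = score + log_prior
    return np.unravel_index(np.argmax(score), score.shape)

# One trial: a bright 4x4 square target at a fixed location.
target = 2.0 * np.ones((4, 4))
loc = (5, 7)
img = make_trial(target, loc)

# A valid cue: Gaussian log-prior bump centered on the cued location.
h = img.shape[0] - target.shape[0] + 1
yy, xx = np.mgrid[0:h, 0:h]
log_prior = -((yy - loc[0]) ** 2 + (xx - loc[1]) ** 2) / 8.0

print(locate(img, target))             # uncued (flat-prior) estimate
print(locate(img, target, log_prior))  # cued estimate
```

With noisier images or weaker targets, the cue prior increasingly dominates the decision, which is one way a purely feedforward system can exhibit Posner-cueing-like benefits at cued locations without any explicit attentional spotlight.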


2021 ◽  
Vol 12 ◽  
Author(s):  
Xuelian Zang ◽  
Leonardo Assumpção ◽  
Jiao Wu ◽  
Xiaowei Xie ◽  
Artyom Zinchenko

In the contextual cueing task, visual search is faster for targets embedded in invariant displays than for targets found in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, whereby salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that associative blocking hinders the allocation of attention to task-irrelevant subsets, but not the learning per se. The current work examined these two explanations. Participants performed a visual search task under a rapid presentation condition (300 ms) in Experiment 1 or under a longer presentation condition (2,500 ms) in Experiment 2. In both experiments, the search items within both old and new displays were presented in two colors that defined the task-relevant and task-irrelevant items within each display. Participants were asked to search for the target within the relevant subset in the learning phase. In the transfer phase, the instructions were reversed and task-irrelevant items became task-relevant (and vice versa). In line with previous studies, searching the task-irrelevant subsets produced no cueing effect after transfer in the longer presentation condition; however, a reliable cueing effect was generated by task-irrelevant subsets learned under the rapid presentation. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning. Under a longer display presentation, by contrast, global attention is blocked, leading to the exclusive learning of the invariant relevant items in the learning session.

