Dimension‐specific intertrial facilitation in visual search for pop‐out targets: Evidence for a top‐down modulable visual short‐term memory effect

2004 ◽  
Vol 11 (5) ◽  
pp. 577-602 ◽  
Author(s):  
Hermann Müller ◽  
Joseph Krummenacher ◽  
Dieter Heller

2010 ◽  
Vol 7 (9) ◽  
pp. 661-661 ◽  
Author(s):  
N. Al-Aidroos ◽  
S. M. Emrich ◽  
J. Pratt ◽  
S. Ferber

2013 ◽  
Vol 110 (1) ◽  
pp. 12-18 ◽  
Author(s):  
A. Shimi ◽  
D. E. Astle

Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances visual processing by biasing it toward the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower-level sensory mechanisms. In three experiments, we examined whether similar biases are applied to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and graded the perceptual similarity between them to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the effect of target-distracter perceptual similarity on occipital alpha amplitudes depended on the number of items already held in VSTM. This trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas.


2012 ◽  
Vol 24 (1) ◽  
pp. 51-60 ◽  
Author(s):  
Bo-Cheng Kuo ◽  
Mark G. Stokes ◽  
Anna Christina Nobre

Recent studies have shown that selective attention is of considerable importance for encoding task-relevant items into visual short-term memory (VSTM) according to our behavioral goals. However, it is not known whether top-down attentional biases can continue to operate during the maintenance period of VSTM. We used ERPs to investigate this question across two experiments. Specifically, we tested whether orienting attention to a given spatial location within a VSTM representation resulted in modulation of the contralateral delay activity (CDA), a lateralized ERP marker of VSTM maintenance generated when participants selectively encode memory items from one hemifield. In both experiments, retrospective cues during the maintenance period could predict a specific item (spatial retrocue) or multiple items (neutral retrocue) that would be probed at the end of the memory delay. Our results revealed that VSTM performance is significantly improved by orienting attention to the location of a task-relevant item. The behavioral benefit was accompanied by modulation of neural activity involved in VSTM maintenance. Spatial retrocues reduced the magnitude of the CDA, consistent with a reduction in memory load. Our results provide direct evidence that top-down control modulates neural activity associated with maintenance in VSTM, biasing competition in favor of the task-relevant information.


2020 ◽  
Author(s):  
Alex Burmester

A common problem in vision research is explaining how humans perceive a coherent, detailed and stable world despite the fact that the eyes make constant, jumpy movements and the fact that only a small part of the visual field can be resolved in detail at any one time. This is essentially a problem of integration over time - how successive views of the visual world can be used to create the impression of a continuous and stable environment. A common way of studying this problem is to use complete visual scenes as stimuli and present a changed scene after a disruption such as an eye movement or a blank screen. These studies find that observers have great difficulty detecting changes made during a disruption, even though these changes are immediately and easily detectable when the disruption is removed. These results have highlighted the importance of motion cues in tracking changes to the environment, but also reveal the limited nature of the internal representation. Change blindness studies are interesting as demonstrations but can be difficult to interpret as they are usually applied to complex, naturalistic scenes. More traditional studies of scene analysis, such as visual search, are more abstract in their formulation, but offer more controlled stimulus conditions. In a typical visual search task, observers are presented with an array of objects against a uniform background and are required to report on the presence or absence of a target object that is differentiable from the other objects in some way. More recently, scene analysis has been investigated by combining change blindness and visual search in the ‘visual search for change’ paradigm, in which observers must search for a target object defined by a change over two presentations of the set of objects. The experiments of this thesis investigate change blindness using the visual search for change paradigm, but also use principles of design from psychophysical experiments dealing with the detection and discrimination of basic visual qualities such as colour, speed, size, orientation and spatial frequency. This allows the experiments to examine precisely the role of these different features in the change blindness process. More specifically, the experiments are designed to look at the capacity of visual short-term memory for different visual features, by examining the retention of this information across the temporal gaps in the change blindness experiments. The nature and fidelity of representations in visual short-term memory are also investigated by manipulating (i) the manner in which featural information is distributed across space and objects, (ii) the time for which the information is available, and (iii) the manner in which observers must respond to that information. Results point to a model in which humans analyse objects in a scene at the level of features/attributes rather than at a pictorial/object level. Results also indicate that the working representations which humans retain during visual exploration are similarly feature- rather than object-based. In conclusion, the thesis proposes a model of scene analysis in which attention and VSTM capacity limits are used to explain the results from a more information-theoretic standpoint.

