Searching for Inefficiency in Visual Search

2015 ◽  
Vol 27 (1) ◽  
pp. 46-56 ◽  
Author(s):  
Gregory J. Christie ◽  
Ashley C. Livingstone ◽  
John J. McDonald

The time required to find an object of interest in the visual field often increases as a function of the number of items present. This increase, or inefficiency, was originally interpreted as evidence for the serial allocation of attention to potential target items, but the interpretation has remained controversial for decades. We investigated this issue by recording event-related potentials (ERPs) from humans searching for a target in displays containing several differently colored items. Search inefficiency was ascribed not to serial search but to the time required to selectively process the target once it was found. Additionally, less time was required for the target to “pop out” from the rest of the display when the color of the target repeated across trials. These findings indicate that task relevance can cause otherwise inconspicuous items to pop out, and they highlight the need for direct neurophysiological measures when investigating the causes of search inefficiency.
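In this literature, search inefficiency is conventionally quantified as the slope of mean reaction time plotted against set size (ms per item), with near-zero slopes taken as evidence of efficient, “pop-out” search. A minimal sketch of that conventional slope analysis, using made-up reaction times rather than values from this study:

```python
import numpy as np

# Illustrative mean reaction times (ms) at each display set size
# (made-up numbers, not data from the study above).
set_sizes = np.array([4, 8, 12, 16])
mean_rts = np.array([520, 562, 601, 645], dtype=float)

# Search efficiency is conventionally summarized by the slope of a
# least-squares line relating RT to set size (ms per item).
slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")

# Rule of thumb: slopes near 0 ms/item are read as efficient ("pop-out")
# search; slopes of roughly 20 ms/item or more as inefficient search.
```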

Perception ◽  
1987 ◽  
Vol 16 (3) ◽  
pp. 389-398 ◽  
Author(s):  
Scott B Steinman

The nature of the processing of combinations of stimulus dimensions in human vision has recently been investigated. A study is reported in which visual search for suprathreshold positional information—vernier offsets, stereoscopic disparity, lateral separation, and orientation—was examined. The initial results showed that reaction times for visual search for conjunctions of stereoscopic disparity and either vernier offsets or orientation were independent of the number of distracting stimuli displayed, suggesting that disparity was searched in parallel with vernier offsets or orientation. Conversely, reaction times for detection of conjunctions of vernier offsets and orientation, or of lateral separation and each of the other positional judgements, were related linearly to the number of distractors, suggesting serial search. However, practice had a significant effect on the results, indicative of a shift in the mode of search from serial to parallel for all conjunctions tested as well as for single features. This suggests a reinterpretation of these and perhaps other studies that use the Treisman visual search paradigm, in terms of perceptual segregation of the visual field by disparity, motion, color, and pattern features such as collinearity, orientation, lateral separation, or size.
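The serial/parallel distinction invoked here comes down to how predicted reaction time grows with the number of items: a parallel search predicts an essentially flat RT function, whereas a serial self-terminating search predicts RT growing linearly with set size (and roughly twice the slope on target-absent trials). A toy sketch of those two predictions; all parameter values are assumptions for illustration only:

```python
import numpy as np

set_sizes = np.array([1, 5, 15, 30])
BASE_RT = 450.0      # ms, assumed non-search processing time
MS_PER_ITEM = 25.0   # ms, assumed cost of inspecting one item serially

# Parallel search: RT is (ideally) independent of the number of items.
parallel_rt = np.full_like(set_sizes, BASE_RT, dtype=float)

# Serial self-terminating search, target present: on average half the
# items are inspected before the target is found.
serial_present_rt = BASE_RT + MS_PER_ITEM * (set_sizes + 1) / 2

# Serial search, target absent: every item must be inspected.
serial_absent_rt = BASE_RT + MS_PER_ITEM * set_sizes

for n, p, sp, sa in zip(set_sizes, parallel_rt, serial_present_rt, serial_absent_rt):
    print(f"n={n:2d}  parallel={p:.0f}  serial present={sp:.0f}  serial absent={sa:.0f}")
```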


Vision ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 13 ◽ 
Author(s):  
Christian Valuch

Color can enhance the perception of relevant stimuli by increasing their salience and guiding visual search towards stimuli that match a task-relevant color. Using Continuous Flash Suppression (CFS), the current study investigated whether color facilitates the discrimination of targets that are difficult to perceive due to interocular suppression. Gabor patterns of two or four cycles per degree (cpd) were shown as targets to the non-dominant eye of human participants. CFS masks were presented at a rate of 10 Hz to the dominant eye, and participants were asked to report the target’s orientation as soon as they could discriminate it. The 2-cpd targets were robustly suppressed and resulted in much longer response times than 4-cpd targets. Moreover, two color-related effects were evident only for the 2-cpd targets. First, in trials where targets and CFS masks had different colors, targets were reported faster than in trials where targets and CFS masks had the same color. Second, targets with a known color, either cyan or yellow, were reported earlier than targets whose color was randomly cyan or yellow. The results suggest that the targets’ entry to consciousness may have been speeded by color-mediated effects relating to increased (bottom-up) salience and (top-down) task relevance.
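The 2- and 4-cpd Gabor targets mentioned here are sinusoidal luminance gratings windowed by a Gaussian envelope, with spatial frequency specified in cycles per degree of visual angle. A minimal sketch of generating such a pattern; the pixels-per-degree value, patch size, and envelope width are assumptions, not the study's stimulus parameters:

```python
import numpy as np

def gabor_patch(size_px, px_per_deg, cpd, orientation_deg, sigma_deg=0.5):
    """Sinusoidal grating at `cpd` cycles/degree inside a Gaussian envelope."""
    half = size_px // 2
    y, x = np.mgrid[-half:half, -half:half] / px_per_deg   # coordinates in degrees
    theta = np.deg2rad(orientation_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)              # axis of luminance modulation
    grating = np.cos(2 * np.pi * cpd * xr)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return grating * envelope   # values in [-1, 1]; rescale to the display's range as needed

# Illustrative parameters: 256-px patches at an assumed 40 px per degree.
target_2cpd = gabor_patch(256, px_per_deg=40, cpd=2, orientation_deg=45)
target_4cpd = gabor_patch(256, px_per_deg=40, cpd=4, orientation_deg=135)
```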


Psychology ◽  
2015 ◽  
Vol 6 (14) ◽  
pp. 1873-1878 ◽  
Author(s):  
Ryotaro Saito ◽  
Yoshifumi Ikeda ◽  
Hideyuki Okuzumi ◽  
Iwao Kobayashi ◽  
Mitsuru Kokubun

2010 ◽  
Vol 9 (8) ◽  
pp. 1210-1210 ◽ 
Author(s):  
L. McIlreavy ◽  
J. Fiser ◽  
P. Bex

Author(s):  
Thomas Z. Strybel ◽  
Jan M. Boucher ◽  
Greg E. Fujawa ◽  
Craig S. Volp

The effectiveness of auditory spatial cues in visual search performance was examined in three experiments. Auditory spatial cues were more effective than abrupt visual onsets when the target appeared in the peripheral visual field or when the contrast of the target was degraded. The duration of the auditory spatial cue did not affect search performance.


2012 ◽  
Vol 3 ◽  
Author(s):  
Emily Wiecek ◽  
Louis R. Pasquale ◽  
Jozsef Fiser ◽  
Steven Dakin ◽  
Peter J. Bex

2009 ◽  
Vol 49 (2) ◽  
pp. 237-248 ◽  
Author(s):  
Tobias Pflugshaupt ◽  
Roman von Wartburg ◽  
Pascal Wurtz ◽  
Silvia Chaves ◽  
Anouk Déruaz ◽  
...  

Author(s):  
Dorothy M. Johnston

This study investigated the relationship between the size of observers' visual fields and the time required to locate targets on static displays. The findings, which indicate that people with large visual fields can find targets more rapidly than observers with small fields, have practical applications for selection and training. Equations are presented that can be used to determine the expected search time as a function of the size of the observer's visual field and the apparent size of the area being searched.
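The abstract does not reproduce the equations themselves, so the form below is purely a hypothetical illustration of the kind of relationship described: expected search time growing with the apparent size of the area to be searched and shrinking as the observer's visual field grows. Neither the functional form nor the constant is taken from the study.

```latex
% Hypothetical illustration only, not Johnston's equations:
% T = expected search time, A = apparent area to be searched,
% F = area of the observer's visual field, k = empirically fitted constant.
T \approx k \,\frac{A}{F}
```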


Perception ◽  
1980 ◽  
Vol 9 (4) ◽  
pp. 451-455 ◽  
Author(s):  
Naoyuki Osaka

Twenty observers in each of four age groups (three, four, five, and twenty-one years) were asked to identify pictures displayed through five different sizes of peephole. Recognition latency changed as a cube-root power function of aperture area; latency decreased as age and area increased. However, the exponent of the power function showed little age-related change. The effectiveness of peripheral visual field size is discussed in terms of the magnitude of the exponent.
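One plausible reading of “cube-root power function” here is a power law whose exponent has a magnitude of about one third; because latency fell as aperture area grew, the exponent is negative. The constants are not reported in this abstract and are shown only symbolically:

```latex
% Power-law form implied by the abstract (constants a and n fitted per age group).
% |n| is roughly 1/3 ("cube-root"), and n is negative because recognition
% latency L decreases as aperture area A increases.
L(A) = a\,A^{\,n}, \qquad n < 0,\ |n| \approx \tfrac{1}{3}
```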


2011 ◽  
Vol 23 (9) ◽  
pp. 2231-2239 ◽  
Author(s):  
Carsten N. Boehler ◽  
Mircea A. Schoenfeld ◽  
Hans-Jochen Heinze ◽  
Jens-Max Hopf

Attention to one feature of an object can bias the processing of unattended features of that object. Here we demonstrate with ERPs in visual search that this object-based bias for an irrelevant feature also appears in an unattended object when it shares that feature with the target object. Specifically, we show that the ERP response elicited by a distractor object in one visual field is modulated as a function of whether a task-irrelevant color of that distractor is also present in the target object presented in the opposite visual field. Importantly, we find this modulation to arise with a delay of approximately 80 msec relative to the N2pc—a component of the ERP response that reflects the focusing of attention onto the target. In a second experiment, we demonstrate that this modulation reflects enhanced neural processing in the unattended object. Together, these observations support the surprising conclusion that the object-based selection of irrelevant features is spatially global even after attention has selected the target object.
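The N2pc referred to here is conventionally computed as a lateralized difference wave: posterior ERP amplitude at electrodes contralateral to the attended item minus amplitude at the corresponding ipsilateral electrodes, emerging roughly 200-300 msec after display onset. A minimal sketch of that difference-wave computation on trial-averaged data; the electrode labels, sampling rate, and synthetic arrays are assumptions, not the study's analysis pipeline:

```python
import numpy as np

def n2pc_difference_wave(po7_target_right, po8_target_right,
                         po7_target_left, po8_target_left):
    """Contralateral-minus-ipsilateral difference wave from trial-averaged
    ERPs at left (PO7) and right (PO8) posterior electrodes, separately
    for targets presented in the right and left visual hemifields."""
    contra = (po7_target_right + po8_target_left) / 2   # electrode opposite the target
    ipsi = (po8_target_right + po7_target_left) / 2     # electrode on the target's side
    return contra - ipsi   # a negative deflection ~200-300 ms marks the N2pc

# Synthetic example: a 0-500 ms epoch sampled at 500 Hz (assumed values).
times = np.arange(0, 0.5, 0.002)
rng = np.random.default_rng(0)
averaged_erps = [rng.normal(0.0, 0.5, times.size) for _ in range(4)]
difference_wave = n2pc_difference_wave(*averaged_erps)
```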

