How do differences across visual features combine to determine visual search efficiency in parallel search?
2018, Vol. 18 (10), p. 282
Author(s): Alejandro Lleras, Jing Xu, Simona Buetti

2007
Author(s): Elizabeth A. Krupinski, Hans Roehrig, Jiahua Fan

2008, Vol. 19 (2), pp. 128-136
Author(s): Geoffrey F. Woodman, Min-Suk Kang, Kirk Thompson, Jeffrey D. Schall

Perception, 1996, Vol. 25 (7), pp. 861-874
Author(s): Rick Gurnsey, Frédéric J A M Poirier, Eric Gascon

Davis and Driver presented evidence suggesting that Kanizsa-type subjective contours could be detected in a visual search task in a time that is independent of the number of nonsubjective-contour distractors. A link was made between these psychophysical data and the physiological data of Peterhans and von der Heydt, which showed that cells in primate area V2 respond to subjective contours in the same way that they respond to luminance-defined contours. Here, in three experiments, it is shown that there was sufficient information in the displays used by Davis and Driver to support parallel search whether or not subjective contours were present. When the confounding properties of the stimuli were eliminated, search became slow whether or not subjective contours were present in the display. One of the slowest search conditions involved stimuli that were virtually identical to those used in the physiological studies of Peterhans and von der Heydt to which Davis and Driver wish to link their data. It is concluded that, while subjective contours may be represented in the responses of very early visual mechanisms (eg in V2), access to these representations is impaired by the high-contrast contours used to induce the subjective contours and by nonsubjective figure distractors. This persistent control problem continues to confound attempts to show that Kanizsa-type subjective contours can be detected in parallel.


2013, Vol. 13 (9), p. 689
Author(s): N. Siva, A. Chaparro, D. Nguyen, E. Palmer

2008, Vol. 14 (6), pp. 990-1003
Author(s): Brandon Keehn, Laurie Brenner, Erica Palmer, Alan J. Lincoln, Ralph-Axel Müller

Although previous studies have shown that individuals with autism spectrum disorder (ASD) excel at visual search, the underlying neural mechanisms remain unknown. This study investigated the neurofunctional correlates of visual search in children with ASD and matched typically developing (TD) children, using an event-related functional magnetic resonance imaging design. We used a visual search paradigm, manipulating search difficulty by varying set size (6, 12, or 24 items), distractor composition (heterogeneous or homogeneous), and target presence to identify brain regions associated with efficient and inefficient search. Although the ASD group did not show faster response times (RTs) than the TD group, they did demonstrate increased search efficiency, as measured by RT by set size slopes. Activation patterns also differed between the ASD group, which recruited a network including frontal, parietal, and occipital cortices, and the TD group, which showed less extensive activation mostly limited to occipito-temporal regions. Direct comparisons (for both homogeneous and heterogeneous search conditions) revealed greater activation in occipital and frontoparietal regions in ASD than in TD participants. These results suggest that search efficiency in ASD may be related to enhanced discrimination (reflected in occipital activation) and increased top-down modulation of visual attention (associated with frontoparietal activation). (JINS, 2008, 14, 990–1003.)
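
As a concrete illustration of the efficiency measure described above, the short Python sketch below fits a line to mean RT as a function of set size; the slope, in milliseconds per item, is the conventional index of search efficiency. This is a minimal sketch for illustration only, not code from the study, and the RT values are hypothetical.

import numpy as np

# Hypothetical mean correct RTs (ms) at the set sizes used in the paradigm above.
set_sizes = np.array([6, 12, 24])
mean_rt_ms = np.array([620.0, 655.0, 710.0])

# Least-squares fit: RT = slope * set_size + intercept.
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)

# Shallower slopes indicate more efficient search; slopes near zero are
# conventionally described as efficient ("parallel") search.
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")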


Perception, 1987, Vol. 16 (3), pp. 389-398
Author(s): Scott B. Steinman

The nature of the processing of combinations of stimulus dimensions in human vision has recently been investigated. A study is reported in which visual search for suprathreshold positional information—vernier offsets, stereoscopic disparity, lateral separation, and orientation—was examined. The initial results showed that reaction times for visual search for conjunctions of stereoscopic disparity and either vernier offsets or orientation were independent of the number of distracting stimuli displayed, suggesting that disparity was searched in parallel with vernier offsets or orientation. Conversely, reaction times for detection of conjunctions of vernier offsets and orientation, or of lateral separation and each of the other positional judgements, were related linearly to the number of distractors, suggesting serial search. However, practice had a significant effect upon the results, indicative of a shift in the mode of search from serial to parallel for all conjunctions tested as well as for single features. This suggests a reinterpretation of these and perhaps other studies that use the Treisman visual search paradigm, in terms of perceptual segregation of the visual field by disparity, motion, color, and pattern features such as colinearity, orientation, lateral separation, or size.
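
For readers unfamiliar with the slope logic of the Treisman paradigm referred to above, the sketch below generates the textbook RT predictions for parallel search (flat RT across set size) versus serial self-terminating search (RT rising linearly with set size, with roughly a 2:1 absent-to-present slope ratio). It is an illustrative toy model with hypothetical timing parameters, not an implementation of the reported study.

def predicted_rt_ms(set_size, serial, target_present=True,
                    base_ms=450.0, per_item_ms=25.0):
    # Parallel search: RT is independent of the number of distractors.
    if not serial:
        return base_ms
    # Serial self-terminating search: on target-present trials the target is
    # found, on average, halfway through the display; on target-absent trials
    # every item must be inspected.
    items_inspected = set_size / 2.0 if target_present else float(set_size)
    return base_ms + per_item_ms * items_inspected

for n in (4, 8, 16):
    print(f"set size {n:2d}: parallel {predicted_rt_ms(n, serial=False):.0f} ms, "
          f"serial present {predicted_rt_ms(n, serial=True):.0f} ms, "
          f"serial absent {predicted_rt_ms(n, serial=True, target_present=False):.0f} ms")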

