Faculty Opinions recommendation of "A summary statistic representation in peripheral vision explains visual search."

Author(s): Aapo Hyvärinen, Michael Gutmann

2012, Vol 12 (4), pp. 14-14
Author(s): R. Rosenholtz, J. Huang, A. Raj, B. J. Balas, L. Ilie

2009, Vol 9 (12), pp. 13-13
Author(s): B. Balas, L. Nakano, R. Rosenholtz

Author(s): William D. Shontz, Gerald A. Trumm, Leon G. Williams

Visual search performance was investigated as a function of color-coded and uncoded information location, number of categories coded, number of objects per category, and background clutter. Thirty-three subjects searched 12 areas of modified sectional aeronautical charts for a total of 48 checkpoints. Identification of checkpoints was established with labels plus geographical context information. Color served as a partially redundant code for information location. In general, the findings indicate that color coding for information location is most effective when: (1) many categories of information can or must be coded, (2) colors highly discriminable in peripheral vision are used, and (3) the number of objects per category is kept reasonably small.


2020, Vol 20 (8), pp. 20
Author(s): Ömer Daglar Tanrikulu, Andrey Chetverikov, Árni Kristjánsson

Perception, 1997, Vol 26 (12), pp. 1555-1570
Author(s): Valerie Brown, Dale Huey, John M Findlay

We examined whether faces can produce a ‘pop-out’ effect in visual search tasks. In the first experiment, subjects' eye movements and search latencies were measured while they viewed a display containing a target face amidst distractors. Targets were upright or inverted faces presented with seven others of the opposite polarity as an ‘around-the-clock’ display. Face images were either photographic or ‘feature only’, with the outline removed. Naive subjects were poor at locating an upright face from an array of inverted faces, but performance improved with practice. In the second experiment, we investigated systematically how training improved performance. Prior to testing, subjects were practised on locating either upright or inverted faces. All subjects benefited from training. Subjects practised on upright faces were faster and more accurate at locating upright target faces than inverted. Subjects practised on inverted faces showed no difference between upright and inverted targets. In the third experiment, faces with ‘jumbled’ features were used as distractors, and this resulted in the same pattern of findings. We conclude that there is no direct rapid ‘pop-out’ effect for faces. However, the findings demonstrate that, in peripheral vision, upright faces show a processing advantage over inverted faces.


2014, Vol 14 (10), pp. 935-935
Author(s): R. Dubey, C. S. Soon, P.-J. Hsieh

2020
Author(s): Yanfang Xia, Filip Melinscak, Dominik R Bach

Threat-conditioned cues are thought to capture overt attention in a bottom-up process. Quantification of this phenomenon typically relies on cue competition paradigms. Here, we sought to exploit gaze patterns during exclusive presentation of a visual conditioned stimulus (CS), in order to quantify human threat conditioning. To this end, we capitalised on a summary statistic of visual search during CS presentation, scanpath length. During a simple delayed threat conditioning paradigm with full-screen monochrome CS, we observed shorter scanpath length during CS+ compared to CS- presentation. Retrodictive validity, i.e. the effect size in distinguishing CS+ from CS-, was maximised by considering a 2-s time window before US onset. Taking into account the shape of the scan speed response resulted in similar retrodictive validity. The mechanism underlying shorter scanpath length appeared to be longer fixation durations and more fixations at the screen centre during CS+ relative to CS- presentation. These findings were replicated in a second experiment with a similar setup, and further confirmed in a third experiment using full-screen patterns as CS. This experiment included an extinction session during which scanpath differences appeared to extinguish. In a fourth experiment with auditory CS and an instruction to fixate the screen centre, no scanpath length differences were observed. In conclusion, our study suggests scanpath length as a visual search summary statistic, which may be used as a complementary measure to quantify threat conditioning with retrodictive validity similar to that of skin conductance responses.
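
The core quantity here, scanpath length, is simply the total distance travelled by gaze within the analysis window. Below is a minimal sketch of that computation, assuming gaze samples given as x/y coordinate arrays with timestamps; the variable names and the sample-based (rather than fixation-based) formulation are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np

    def scanpath_length(x, y, t, t_start, t_end):
        # Total Euclidean distance between consecutive gaze samples
        # whose timestamps fall inside [t_start, t_end].
        x, y, t = map(np.asarray, (x, y, t))
        keep = (t >= t_start) & (t <= t_end)
        xs, ys = x[keep], y[keep]
        if xs.size < 2:
            return 0.0
        return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

    # Hypothetical usage: for the 2-s window before a US occurring at us_onset,
    # length = scanpath_length(x, y, t, us_onset - 2.0, us_onset)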


2020
Author(s): Anne-Sophie Laurin, Julie Ouerfelli-Éthier, Laure Pisella, Aarlenne Zein Khan

Older adults show declines in visual search performance, but the nature of these declines is unclear. We propose that they are related to a greater attentional reliance on central vision. To investigate this, we tested how occluding central vision would affect younger and older adults in visual search. Participants (14 younger, M = 21.6 years; 16 older, M = 69.6 years) performed pop-out and serial search tasks in full view and with gaze-contingent artificial central scotomas of different sizes (no scotoma, 3°, 5° or 7° diameter). In pop-out search, older adults showed longer search times for peripheral targets during full viewing. Their reaction times, saccades and fixation durations also increased as a function of scotoma size, contrary to younger adults. These declines may reflect a relative impairment in peripheral visual attention for global processing in aging. In serial search, despite older adults being generally slower, we found no difference between groups in reaction time increases for eccentric targets or for bigger scotomas. These results may stem from the difficulty of serial search, in which both groups used centrally limited attentional windows. We conclude that older adults allocate more attentional resources towards central vision than younger adults do, impairing their peripheral processing primarily in pop-out visual search.
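
The gaze-contingent artificial scotoma described above amounts to masking a disc of fixed angular diameter around the current gaze position on every display frame. The sketch below illustrates the geometry, assuming a numpy image, pixel-space gaze coordinates, and a hypothetical viewing distance and pixel density; it is not the authors' stimulation code.

    import numpy as np

    def deg_to_px(deg, viewing_distance_cm=57.0, px_per_cm=37.8):
        # Convert a visual angle to on-screen pixels for an assumed setup
        # (at 57 cm viewing distance, 1 cm on screen subtends roughly 1 degree).
        size_cm = 2.0 * viewing_distance_cm * np.tan(np.radians(deg) / 2.0)
        return size_cm * px_per_cm

    def apply_central_scotoma(frame, gaze_xy, diameter_deg, fill_value=128):
        # Gray out a disc of the given angular diameter centred on the current
        # gaze position (gaze_xy in pixels), simulating a central scotoma.
        h, w = frame.shape[:2]
        radius_px = deg_to_px(diameter_deg) / 2.0
        yy, xx = np.mgrid[0:h, 0:w]
        inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
        out = frame.copy()
        out[inside] = fill_value
        return out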


2019, Vol 3 (2), pp. 26
Author(s): Damien Camors, Damien Appert, Jean-Baptiste Durand, Christophe Jouffrais

The loss of peripheral vision is experienced by millions of people with glaucoma or retinitis pigmentosa, and has a major impact on everyday life, especially on the ability to locate visual targets in the environment. In this study, we designed a wearable interface that renders the location of specific targets with private and non-intrusive tactile cues. Three experimental studies were completed to design and evaluate the tactile code and the device. In the first study, four different tactile codes (single stimuli or trains of pulses rendered in either a Cartesian or a polar coordinate system) were evaluated with a head pointing task. In the following studies, the most efficient code, trains of pulses with Cartesian coordinates, was delivered through a bracelet worn on the wrist and evaluated during a visual search task in a complex virtual environment. The second study included ten subjects with a simulated restricted field of view (10°). The last study was a proof of concept with one visually impaired subject whose peripheral vision was restricted by glaucoma. The results show that the device significantly improved visual search efficiency, by a factor of three. Combined with an object recognition algorithm running on smart glasses, the device could help to detect targets of interest, either on demand or suggested by the device itself (e.g., potential obstacles), facilitating visual search and, more generally, spatial awareness of the environment.
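
As a rough illustration of a Cartesian pulse-train code, a target's direction relative to the head can be split into horizontal and vertical angular offsets, each rendered as a short train of pulses on the corresponding tactor. The mapping below (degrees per pulse, maximum pulse count, tactor names) is assumed for illustration only; the abstract does not specify the exact parameters used in the study.

    def cartesian_pulse_code(azimuth_deg, elevation_deg,
                             deg_per_pulse=10.0, max_pulses=5):
        # Encode a target direction as two pulse trains, one per Cartesian axis:
        # the sign of the offset selects the tactor, and the pulse count grows
        # with the size of the offset (clamped to max_pulses).
        def encode(offset_deg, pos_tactor, neg_tactor):
            n = min(max_pulses, max(1, round(abs(offset_deg) / deg_per_pulse)))
            return (pos_tactor if offset_deg >= 0 else neg_tactor), n
        return encode(azimuth_deg, "right", "left"), encode(elevation_deg, "up", "down")

    # Example: a target 32 degrees to the right and 5 degrees above the head axis
    # gives (("right", 3), ("up", 1)) under these assumed parameters.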

