Supplemental Material for The Face in the Crowd Effect Unconfounded: Happy Faces, Not Angry Faces, Are More Efficiently Detected in Single- and Multiple-Target Visual Search Tasks

2011 · Vol. 140 (4) · pp. 637-659
Author(s): D. Vaughn Becker, Uriah S. Anderson, Chad R. Mortensen, Samantha L. Neufeld, Rebecca Neel

Author(s): Viktoria R. Mileva, Peter J. B. Hancock, Stephen R. H. Langton

Abstract: Finding an unfamiliar person in a crowd of others is an integral task for police officers, CCTV operators, and security staff who may be looking for a suspect or missing person; however, research suggests that this task is difficult and that accuracy is low. In two real-world visual search experiments, we examined whether being provided with four images of an unfamiliar target person, rather than one, would improve accuracy when searching for that person in video footage. In Experiment 1, videos were taken from above and at a distance to simulate CCTV, and images of the target showed the face and torso. In Experiment 2, videos were taken from approximately shoulder height, as one would expect from body-camera or mobile-phone recordings, and target images included only the face. Our findings suggest that having four images as exemplars leads to higher accuracy in the visual search tasks, although this advantage only reached significance in Experiment 2. There also appears to be a conservative bias whereby participants are more likely to respond that the target is not in the video when presented with only one image as opposed to four. These results point to an advantage of providing multiple images of targets for use in video visual search.
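The "conservative bias" described here is the kind of response bias conventionally quantified with signal detection theory. The sketch below is an illustration only, not the authors' analysis: it computes sensitivity (d') and criterion (c) from hypothetical hit and false-alarm counts, where a positive criterion indicates a bias toward responding "target absent".

```python
# Minimal sketch (not from the paper): standard signal detection measures.
# The trial counts below are hypothetical placeholders.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity (d') and criterion (c) from trial counts."""
    # Log-linear correction so rates of 0 or 1 do not break the z-transform.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)   # c > 0: conservative ("target absent") bias
    return d_prime, criterion

# Hypothetical counts for a one-image condition.
print(sdt_measures(hits=30, misses=20, false_alarms=5, correct_rejections=45))
```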


2012
Author(s): Stephen R. Mitroff, Adam T. Biggs, Matthew S. Cain, Elise F. Darling, Kait Clark, ...

1998 · Vol. 6 (3) · pp. 227-232
Author(s): Philippe Stivalet, Yvan Moreno, Joëlle Richard, Pierre-Alain Barraud, Christian Raphel

Perception · 1992 · Vol. 21 (4) · pp. 465-480
Author(s): Jeremy M Wolfe, Alice Yee, Stacia R Friedman-Hill

2021
Author(s): Heida Maria Sigurdardottir, Hilma Ros Omarsdóttir, Anna Sigridur Valgeirsdottir

Attention has been hypothesized to act as a sequential gating mechanism for the orderly processing of letters in words. These same visuo-attentional processes are assumed to partake in some, but not all, visual search tasks. In the current study, 60 adults with varying degrees of reading ability, ranging from expert readers to severely impaired dyslexic readers, completed an attentionally demanding visual conjunction search task thought to rely heavily on the dorsal visual stream. A visual feature search task served as an internal control. According to the dorsal view of dyslexia, reading problems should go hand in hand with specific deficits in visual conjunction search, particularly elevated conjunction search slopes (time per search item), which would be interpreted as a problem with visual attention. Results showed that reading problems were associated with slower visual search, especially conjunction search. However, problems with reading were not associated with increased conjunction search slopes but instead with increased conjunction search intercepts, which are traditionally not interpreted as reflecting attentional processes. Our data are therefore hard to reconcile with the hypothesis that dyslexia involves a problem with serially moving an attentional spotlight across a visual scene or a page of text.
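Search slopes and intercepts of this kind are conventionally estimated by regressing response time on the number of items in the display: the slope is the added time per item and the intercept is the set-size-independent component. The sketch below uses made-up data to illustrate this standard analysis; it is not the study's actual code or data.

```python
# Minimal sketch (hypothetical data): estimating a visual search slope
# (ms per item) and intercept (ms) by regressing response time on set size.
import numpy as np

set_size = np.array([4, 8, 16, 32])                  # items in the display
rt_ms    = np.array([620.0, 710.0, 890.0, 1240.0])   # mean correct RTs (made up)

# Ordinary least squares fit: rt = intercept + slope * set_size
slope, intercept = np.polyfit(set_size, rt_ms, deg=1)
print(f"slope = {slope:.1f} ms/item, intercept = {intercept:.1f} ms")
```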


2018
Author(s): Alasdair D F Clarke, Jessica Irons, Warren James, Andrew B. Leber, Amelia R. Hunt

A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here we ask whether an individual's strategy and performance in one search task are correlated with how they perform in the other two. We tested 64 observers in the three tasks mentioned above over two sessions. Even though the test-retest reliability of the tasks is high, an observer's performance and strategy in one task did not reliably predict their behaviour in the other two. These results suggest that search strategies are stable over time but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also for how individuals interact with the search task and context. These context-specific but stable individual differences in strategy can account for a substantial proportion of the variability in search performance.
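The dissociation described here between test-retest reliability (stability within a task) and cross-task prediction can be made concrete with simple Pearson correlations. The sketch below uses simulated scores purely for illustration; the variable names and data are assumptions, not the study's materials.

```python
# Minimal sketch (simulated scores): high test-retest reliability within a task
# can coexist with near-zero correlation between tasks.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                  # observers, matching the reported sample size

# A stable, task-specific strategy score per task (no shared component across tasks).
task_a = rng.normal(size=n)
task_b = rng.normal(size=n)

# Session 2 scores: the session 1 score plus a little measurement noise.
task_a_s2 = task_a + 0.3 * rng.normal(size=n)
task_b_s2 = task_b + 0.3 * rng.normal(size=n)

print("test-retest, task A:", np.corrcoef(task_a, task_a_s2)[0, 1])   # high
print("test-retest, task B:", np.corrcoef(task_b, task_b_s2)[0, 1])   # high
print("between tasks:      ", np.corrcoef(task_a, task_b)[0, 1])      # near zero
```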

