Stable individual differences in strategies within, but not between, visual search tasks

2020 ◽  
pp. 174702182092919 ◽  
Author(s):  
Alasdair D. F. Clarke ◽  
Jessica L. Irons ◽  
Warren James ◽  
Andrew B. Leber ◽  
Amelia R. Hunt

A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here, we ask whether an individual's strategy and performance in one search task are correlated with how they perform in the other two. We tested 64 observers on the three tasks over two sessions. Even though the test–retest reliability of the tasks was high, an observer's performance and strategy in one task did not reliably predict their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also for how individuals interact with the search task and context. These context-specific but stable individual differences in strategy can account for a substantial proportion of variability in search performance.


1978 ◽  
Vol 22 (1) ◽  
pp. 299-302 ◽  
Author(s):  
William H. Cushman

Nine subjects performed a visual search task for two 100-minute sessions using microfiche with positive-appearing images and small, portable microfiche readers. During one session the subjects performed the task with a reader whose screen had highly visible scintillation; during the other they used a reader equipped with a screen that was nearly free from scintillation. Dependent variables were subjective visual fatigue, general fatigue, and number of targets located. Subjects reported significantly greater visual fatigue after viewing the high-scintillation screen for 50–100 minutes than after viewing the low-scintillation screen for the same length of time. When the high-scintillation screen was used, the subjects also reported an increase in general fatigue. Screen scintillation did not affect the subjects' performance on the search task, however.


1981 ◽  
Vol 53 (2) ◽  
pp. 411-418
Author(s):  
Lance A. Portnoff ◽  
Jerome A. Yesavage ◽  
Mary B. Acker

Disturbances in attention are among the most frequent cognitive abnormalities in schizophrenia. Recent research has suggested that some schizophrenics have difficulty with visual tracking, which is suggestive of attentional deficits. To investigate differential visual-search performance by schizophrenics, 15 chronic undifferentiated and 15 paranoid schizophrenics were compared with 15 normals on two tests measuring visual search in a systematic and an unsystematic stimulus mode. Chronic schizophrenics showed difficulty with both kinds of visual-search tasks. In contrast, paranoids showed a deficit only in the systematic visual-search task; their ability for visual search in an unsystematized stimulus array was equivalent to that of normals. Although replication and cross-validation are needed to confirm these findings, it appears that the two tests of visual search may provide a useful ancillary method for differential diagnosis between these two types of schizophrenia.


2021 ◽  
Author(s):  
Thomas L. Botch ◽  
Brenda D. Garcia ◽  
Yeo Bi Choi ◽  
Caroline E. Robertson

Visual search is a universal human activity in naturalistic environments. Traditionally, visual search is investigated under tightly controlled conditions, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do classic findings of visual search extend to naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality (VR) technology to relate individual differences in classic visual search paradigms to naturalistic search behavior. In a naturalistic visual search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic visual search task, participants searched for a target within a simple array of colored letters using only eye movements. We tested how set size, a property known to limit visual search within computer displays, predicts the efficiency of search behavior inside immersive, real-world scenes that vary in levels of visual clutter. We found that participants' search performance was impacted by the level of visual clutter within real-world scenes. Critically, we also observed that individual differences in visual search efficiency in classic search predicted efficiency in real-world search, but only when the comparison was limited to the forward-facing field of view for real-world search. These results demonstrate that set size is a reliable predictor of individual performance across computer-based and active, real-world visual search behavior.


2021 ◽  
Vol 11 (3) ◽  
pp. 283
Author(s):  
Olga Lukashova-Sanz ◽  
Siegfried Wahl

Visual search becomes challenging when the time to find the target is limited. Here, we focus on how performance in visual search can be improved via a subtle saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images was displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, where the task was to find the target as fast as possible. The mean search time and the proportion of trials in which participants failed to find the target were compared across conditions. Furthermore, the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that it is possible to improve performance through a subtle, saliency-aware scene modulation in a challenging, realistic visual search scenario. The current work provides insight into potential visual augmentation designs aiming to improve users' performance in everyday visual search tasks.


2021 ◽  
Vol 15 ◽  
Author(s):  
Jane W. Couperus ◽  
Kirsten O. Lydic ◽  
Juniper E. Hollis ◽  
Jessica L. Roy ◽  
Amy R. Lowe ◽  
...  

The lateralized ERP N2pc component has been shown to be an effective marker of attentional object selection when elicited in a visual search task, specifically reflecting the selection of a target item among distractors. Moreover, when targets are known in advance, the visual search process is guided by representations of target features held in working memory at the time of search, guiding attention to objects with target-matching features. Previous studies have shown that manipulating working memory availability via concurrent tasks or within-task manipulations influences visual search performance and the N2pc. Other studies have indicated that visual (non-spatial) vs. spatial working memory manipulations make differential contributions to visual search. To investigate this, the current study assessed participants' visual and spatial working memory ability independent of the visual search task to determine whether such individual differences in working memory affect task performance and the N2pc. Participants (n = 205) completed a visual search task to elicit the N2pc, as well as separate visual working memory (VWM) and spatial working memory (SPWM) assessments. Greater SPWM, but not VWM, ability was correlated with and predicted higher visual search accuracy and greater N2pc amplitudes. Neither VWM nor SPWM was related to N2pc latency. These results provide additional support for prior behavioral and neural visual search findings that spatial WM availability, whether as an ability of the participant's processing system or based on task demands, plays an important role in efficient visual search.


2021 ◽  
Author(s):  
Heida Maria Sigurdardottir ◽  
Hilma Ros Omarsdóttir ◽  
Anna Sigridur Valgeirsdottir

Attention has been hypothesized to act as a sequential gating mechanism for the orderly processing of letters in words. These same visuo-attentional processes are assumed to partake in some but not all visual search tasks. In the current study, 60 adults with varying degrees of reading ability, ranging from expert readers to severely impaired dyslexic readers, completed an attentionally demanding visual conjunction search task thought to rely heavily on the dorsal visual stream. A visual feature search task served as an internal control. According to the dorsal view of dyslexia, reading problems should go hand in hand with specific problems in visual conjunction search – particularly elevated conjunction search slopes (time per search item) – which would be interpreted as a problem with visual attention. Results showed that reading problems were associated with slower visual search, especially conjunction search. However, problems with reading were not associated with increased conjunction search slopes but instead with increased conjunction search intercepts, traditionally not interpreted as reflecting attentional processes. Our data are hard to reconcile with the hypothesis that dyslexia involves problems with serially moving an attentional spotlight across a visual scene or a page of text.
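The slope/intercept distinction in the abstract above comes from the standard linear model of search response times, RT ≈ intercept + slope × set size: the slope (ms per item) is conventionally read as the per-item cost of shifting attention, while the intercept absorbs set-size-independent stages. A minimal sketch of how the two parameters are separated by an ordinary least-squares fit, using made-up RT values rather than the study's data:

```python
# Hedged sketch: least-squares fit of the search function
# RT = intercept + slope * set_size. All RT values below are
# hypothetical, for illustration only (not the study's data).

def fit_search_function(set_sizes, rts):
    """Return (slope, intercept) of the best-fitting line RT = a + b*N."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
             / sum((x - mean_x) ** 2 for x in set_sizes))
    intercept = mean_y - slope * mean_x
    return slope, intercept

set_sizes = [4, 8, 16]              # number of items in the display
feature_rts = [520, 524, 532]       # ms; near-flat "pop-out" feature search
conjunction_rts = [700, 860, 1180]  # ms; steep, serial-like conjunction search

f_slope, f_intercept = fit_search_function(set_sizes, feature_rts)
c_slope, c_intercept = fit_search_function(set_sizes, conjunction_rts)
# The slope difference (~1 vs. ~40 ms/item here) is the classic feature vs.
# conjunction signature; the intercepts capture set-size-independent stages,
# which is where the reading-related effect in the study appeared.
```

On this decomposition, a group difference in intercepts but not slopes points to slower pre- or post-search processing rather than a slower item-by-item attentional scan.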


2021 ◽  
Vol 2 ◽  
Author(s):  
Zekun Cao ◽  
Jeronimo Grandi ◽  
Regis Kopper

Dynamic field of view (FOV) restrictors have been successfully used to reduce visually induced motion sickness (VIMS) during continuous viewpoint motion control (virtual travel) in virtual reality (VR). This benefit, however, comes at the cost of losing peripheral awareness during provocative motion. Likewise, the use of visual references that are stable in relation to the physical environment, called rest frames (RFs), has also been shown to reduce discomfort during virtual travel tasks in VR. We propose a new RF-based design called Granulated Rest Frames (GRFs) with a soft-edged circular cutout in the center that leverages the rest frames' benefits without completely blocking the user's peripheral view. The GRF design is application-agnostic and does not rely on context-specific RFs, such as commonly used cockpits. We report on a within-subjects experiment with 20 participants. The results suggest that, by strategically applying GRFs during a visual search session in VR, we can achieve better item-search efficiency than with a restricted FOV. The effect of GRFs on reducing VIMS remains to be determined by future work.


Author(s):  
Thomas Z. Strybel ◽  
Jan M. Boucher ◽  
Greg E. Fujawa ◽  
Craig S. Volp

The effectiveness of auditory spatial cues on visual search performance was examined in three experiments. Auditory spatial cues were more effective than abrupt visual onsets when the target appeared in the peripheral visual field or when the contrast of the target was degraded. The duration of the auditory spatial cue did not affect search performance.


2012 ◽  
Vol 25 (0) ◽  
pp. 158
Author(s):  
Pawel J. Matusz ◽  
Martin Eimer

We investigated whether top-down attentional control settings can specify task-relevant features in different sensory modalities (vision and audition). Two audiovisual search tasks were used in which a spatially uninformative visual singleton cue preceded a target search array. In different blocks, participants searched for a visual target (defined by colour or shape in Experiments 1 and 2, respectively) or for a target defined by a combination of visual and auditory features (e.g., a red target accompanied by a high-pitch tone). Spatial cueing effects indicative of attentional capture by target-matching visual singleton cues in the unimodal visual search task were reduced or completely eliminated when targets were audiovisually defined. The N2pc component (an index of attentional target selection in vision) triggered by these cues was reduced and delayed during search for audiovisual as compared to unimodal visual targets. These results provide novel evidence that the top-down control settings which guide attentional selectivity can include perceptual features from different sensory modalities.

