The Effects of Spatial Auditory Preview on Dynamic Visual Search Performance

Author(s):  
Bartholomew Elias

Since the auditory system is not spatially restricted like the visual system, spatial auditory cues can provide information regarding object position, velocity, and trajectory beyond the field of view. A laboratory experiment was conducted to demonstrate that visual displays can be augmented with dynamic spatial auditory cues that convey the motion characteristics of unseen objects. Dynamic spatial auditory cues presented through headphones provided preview information regarding target position, velocity, and trajectory beyond the field of view in a dynamic visual search task: subjects acquired and identified moving visual targets that traversed a display cluttered with varying numbers of moving distractors. The provision of spatial auditory preview significantly reduced response times to acquire and identify the visual targets and significantly reduced error rates, especially when the visual display load was high. These findings demonstrate that dynamic spatial auditory preview cues are a viable mechanism for augmenting visual search performance in dynamic task environments.
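
A rough sense of how such a preview cue might be rendered: the classic cue to azimuth is the interaural time difference (ITD), reinforced by the interaural level difference (ILD). The Python sketch below pans a mono tone to a target azimuth using Woodworth's ITD approximation plus a crude level difference; the constants and the panning scheme are illustrative assumptions, not the apparatus used in the study.

```python
import numpy as np

HEAD_RADIUS = 0.0875    # m, average adult head radius (assumed)
SPEED_OF_SOUND = 343.0  # m/s
FS = 44100              # Hz, sample rate

def woodworth_itd(azimuth_rad):
    """Interaural time difference (s) for a source at the given azimuth
    (0 = straight ahead, positive = right), via Woodworth's approximation."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(azimuth_rad) + azimuth_rad)

def spatialize(mono, azimuth_rad):
    """Pan a mono cue to stereo with an ITD (delay to the far ear) and a
    crude ILD (attenuation of the far ear, up to ~6 dB)."""
    delay = int(round(abs(woodworth_itd(azimuth_rad)) * FS))
    far_gain = 10 ** (-6 * abs(np.sin(azimuth_rad)) / 20)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    # Positive azimuth = source on the right, so the right ear is nearer.
    left, right = (far, near) if azimuth_rad > 0 else (near, far)
    return np.column_stack([left, right])

# Example: a 200 ms, 1 kHz cue for a target 90 degrees to the right,
# i.e., well outside a frontal visual display.
t = np.arange(int(0.2 * FS)) / FS
stereo = spatialize(0.5 * np.sin(2 * np.pi * 1000 * t), np.deg2rad(90))
```

Updating the azimuth frame by frame as the unseen target moves would yield the kind of dynamic position and velocity preview the study describes.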

Author(s):  
Ulrich Engelke ◽  
Andreas Duenser ◽  
Anthony Zeater

Selective attention is an important cognitive resource to account for when designing effective human-machine interaction and cognitive computing systems. Much of our knowledge about attentional processing stems from search tasks that are usually framed around Treisman's feature integration theory and Wolfe's Guided Search. However, search performance in these tasks has mainly been investigated using an overt attention paradigm; covert attention has hardly been investigated in this context. To gain a more thorough understanding of human attentional processing, and especially of covert search performance, the authors experimentally investigated the relationship between overt and covert visual search for targets under a variety of target/distractor combinations. The overt search results presented in this work agree well with the Guided Search studies by Wolfe et al. The authors show that response times are influenced considerably more by the target/distractor combination than by the attentional search paradigm deployed. While response times are similar between the overt and covert search conditions, error rates are considerably higher in covert search. The authors further show that response times between participants become more strongly correlated as search task complexity increases. They discuss their findings and put them into the context of earlier research on visual search.
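
To make the target/distractor manipulation concrete, the sketch below generates the two canonical display types such studies use: a feature search, where the target differs from every distractor in one feature, and a conjunction search, where each distractor shares one feature with the target. The specific colors, shapes, and function names are illustrative assumptions, not the authors' exact stimuli.

```python
import random

def make_display(set_size, condition="conjunction"):
    """Build one search display as a list of (color, shape) items.
    Target: a red X. Feature search: all distractors are green Xs, so color
    alone finds the target. Conjunction search: distractors are red Os or
    green Xs, so no single feature separates target from distractors."""
    target = ("red", "X")
    if condition == "feature":
        distractors = [("green", "X")] * (set_size - 1)
    else:
        distractors = [random.choice([("red", "O"), ("green", "X")])
                       for _ in range(set_size - 1)]
    items = distractors + [target]
    random.shuffle(items)  # randomize item positions
    return items

# Guided Search predicts roughly flat response times over set size for the
# feature display and a steeper set-size slope for the conjunction display.
trial = make_display(set_size=12, condition="conjunction")
```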


Author(s):  
Clayton Rothwell ◽  
Griffin Romigh ◽  
Brian Simpson

As visual display complexity grows, visual cues and alerts may become less salient and therefore less effective. Although the auditory system's spatial resolution is rather coarse relative to the visual system's, there is some evidence that virtual spatialized audio can benefit visual search within a small frontal region, such as a desktop monitor. Two experiments examined whether search times could be reduced, compared to visual-only search, through spatial auditory cues rendered using one of two methods: individualized or generic head-related transfer functions. Results showed that cue type interacted with display complexity, with larger reductions relative to visual-only search as set size increased. For larger set sizes, individualized cues were significantly better than generic cues overall. Across all set sizes, individualized cues were better than generic cues for cueing eccentric elevations (>±8°). Where performance must be maximized, designers should use individualized virtual audio if at all possible, even in a small frontal region within the field of view.
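
The core rendering step behind both cue types is the same: convolve the cue with a pair of head-related impulse responses (HRIRs) for the target direction; what differs is whose HRIRs they are. A minimal sketch, assuming the left and right HRIRs for the desired direction have already been loaded as equal-length NumPy arrays (the loading step depends on the dataset format and is omitted):

```python
import numpy as np
from scipy.signal import fftconvolve

def render_cue(mono, hrir_left, hrir_right):
    """Spatialize a mono cue by convolving it with the left- and right-ear
    head-related impulse responses measured for the target's direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    out = np.column_stack([left, right])
    return out / np.max(np.abs(out))  # normalize to avoid clipping

# "Individualized" cues would use HRIRs measured on the listener's own ears;
# "generic" cues, a standard set (e.g., from a dummy head). The elevation
# result above suggests individualized measurement matters most for
# directions off the horizontal plane.
```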


2021 ◽  
Vol 2 ◽  
Author(s):  
Zekun Cao ◽  
Jeronimo Grandi ◽  
Regis Kopper

Dynamic field of view (FOV) restrictors have been successfully used to reduce visually induced motion sickness (VIMS) during continuous viewpoint motion control (virtual travel) in virtual reality (VR). This benefit, however, comes at the cost of losing peripheral awareness during provocative motion. Likewise, the use of visual references that are stable in relation to the physical environment, called rest frames (RFs), has also been shown to reduce discomfort during virtual travel tasks in VR. We propose a new RF-based design called Granulated Rest Frames (GRFs), with a soft-edged circular cutout in the center, that leverages the benefits of rest frames without completely blocking the user's peripheral view. The GRF design is application-agnostic and does not rely on context-specific RFs, such as the commonly used cockpits. We report on a within-subjects experiment with 20 participants. The results suggest that, by strategically applying GRFs during a visual search session in VR, we can achieve better search efficiency than with a restricted FOV. The effect of GRFs on reducing VIMS remains to be determined by future work.
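
The soft-edged circular cutout can be described as a per-pixel opacity for the rest-frame overlay: fully transparent in the center, fully opaque past an outer radius, with a smooth falloff between. A minimal sketch of such a mask (the radii, the smoothstep falloff, and the function name are assumptions for illustration; the paper's actual GRF rendering may differ):

```python
import numpy as np

def grf_alpha(width, height, inner_frac=0.3, outer_frac=0.5):
    """Opacity map for a rest-frame overlay with a soft-edged circular
    cutout: 0 (overlay invisible) inside the inner radius, 1 (overlay fully
    visible) beyond the outer radius, smoothstep in between."""
    y, x = np.mgrid[0:height, 0:width]
    cx, cy = width / 2.0, height / 2.0
    r = np.hypot(x - cx, y - cy) / min(cx, cy)   # radius, normalized
    t = np.clip((r - inner_frac) / (outer_frac - inner_frac), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)               # smoothstep: no hard edge

alpha = grf_alpha(1920, 1080)  # multiply the RF texture's alpha by this map
```

The soft edge is the point of the design: it keeps stable, physically anchored references in the periphery while leaving the central search region unobstructed.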


Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can cause visual system overload and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked auditory condition being slower than the non-masked one. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.


Author(s):  
John P. McIntire ◽  
Paul R. Havig ◽  
Scott N. J. Watamaniuk ◽  
Robert H. Gilkey

Author(s):  
Megan H. Papesh ◽  
Michael C. Hout ◽  
Juan D. Guevara Pinto ◽  
Arryn Robbins ◽  
Alexis Lopez

Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled searches, such as athletics referees or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experience changes behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and of the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0–3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.
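
The oculomotor measures reported here can be derived from raw gaze data with a simple velocity-threshold (I-VT) classifier: samples moving faster than a threshold are saccades, the rest are fixations. A minimal sketch under assumed conditions (gaze positions in degrees of visual angle, recording beginning during a fixation; the 30°/s threshold is a common but arbitrary choice):

```python
import numpy as np

def saccade_amplitudes(x_deg, y_deg, fs, vel_thresh=30.0):
    """Velocity-threshold (I-VT) classification of gaze samples.
    Returns the amplitude in degrees of each detected saccade."""
    vx = np.gradient(x_deg) * fs          # deg/s
    vy = np.gradient(y_deg) * fs
    speed = np.hypot(vx, vy)
    moving = speed > vel_thresh           # True during saccades
    edges = np.flatnonzero(np.diff(moving.astype(int)))
    onsets, offsets = edges[::2], edges[1::2]   # assumes start in fixation
    return [np.hypot(x_deg[b] - x_deg[a], y_deg[b] - y_deg[a])
            for a, b in zip(onsets, offsets)]
```

Fixation durations fall out of the same classification (the intervals between saccades), so one pass over the gaze stream yields the fixation probabilities, durations, and saccade amplitudes tracked across the semester.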


2019 ◽  
Vol 121 (4) ◽  
pp. 1300-1314 ◽  
Author(s):  
Mathieu Servant ◽  
Gabriel Tillman ◽  
Jeffrey D. Schall ◽  
Gordon D. Logan ◽  
Thomas J. Palmeri

Stochastic accumulator models account for response times and errors in perceptual decision making by assuming a noisy accumulation of perceptual evidence to a threshold. Previously, we explained saccade visual search decision making by macaque monkeys with a stochastic multiaccumulator model in which accumulation was driven by a gated feed-forward integration to threshold of spike trains from visually responsive neurons in frontal eye field that signal stimulus salience. This neurally constrained model quantitatively accounted for response times and errors in visual search for a target among varying numbers of distractors and replicated the dynamics of presaccadic movement neurons hypothesized to instantiate evidence accumulation. This modeling framework suggested strategic control over gate or over threshold as two potential mechanisms to accomplish speed-accuracy tradeoff (SAT). Here, we show that our gated accumulator model framework can account for visual search performance under SAT instructions observed in a milestone neurophysiological study of frontal eye field. The framework captured key elements of saccade search performance through observed modulations of neural input, as well as the flexible combinations of gate and threshold parameters necessary to explain differences in SAT strategy across monkeys. However, the trajectories of the model accumulators deviated from the dynamics of most presaccadic movement neurons. These findings demonstrate that traditional theoretical accounts of SAT are incomplete descriptions of the underlying neural adjustments that accomplish it, offer a novel mechanistic account of decision making during speed-accuracy tradeoff, and highlight questions regarding the identity of model and neural accumulators. NEW & NOTEWORTHY A gated accumulator model is used to elucidate neurocomputational mechanisms of speed-accuracy tradeoff. Whereas canonical stochastic accumulators adjust strategy only through variation of an accumulation threshold, we demonstrate that strategic adjustments are accomplished by flexible combinations of both modulation of the evidence representation and adaptation of accumulator gate and threshold. The results indicate how model-based cognitive neuroscience can translate between abstract cognitive models of performance and neural mechanisms of speed-accuracy tradeoff.
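
The core mechanism is easy to state computationally: each accumulator integrates its input drive minus a gate (rectified at zero) plus noise, and the first to reach threshold determines the response and its time. A minimal simulation sketch (the drive values, gate, threshold, and noise level are illustrative assumptions, not the paper's fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_race(drives, gate, threshold, dt=0.001, noise_sd=0.1, max_t=2.0):
    """Race of gated accumulators. Each unit integrates max(drive - gate, 0)
    plus Gaussian noise; activations are kept non-negative. Returns the
    decision time (s) and the index of the winning accumulator."""
    acts = np.zeros(len(drives))
    net = np.maximum(np.asarray(drives) - gate, 0.0)  # gated feed-forward input
    for step in range(int(max_t / dt)):
        acts += net * dt + rng.normal(0.0, noise_sd * np.sqrt(dt), len(drives))
        acts = np.maximum(acts, 0.0)
        if acts.max() >= threshold:
            return (step + 1) * dt, int(acts.argmax())
    return max_t, -1  # no accumulator reached threshold in time

# SAT manipulations in this framework: speed emphasis ~ lower gate and/or
# threshold, accuracy emphasis ~ higher values (values illustrative only).
fast = [gated_race([1.0, 0.8], gate=0.2, threshold=0.15) for _ in range(200)]
accurate = [gated_race([1.0, 0.8], gate=0.4, threshold=0.30) for _ in range(200)]
```

The paper's point is that fitting the monkeys' SAT behavior required adjusting the gate, the threshold, and the input representation together, not the threshold alone.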


1976 ◽  
Vol 20 (10) ◽  
pp. 204-209
Author(s):  
John R. Bloomfield ◽  
John A. Modrick

This approach to visual search introduces the concepts of organization, variable field of view and congratulation. It builds upon ideas developed in glimpse/detection lobe models of visual search. It suggests experiments that go beyond an assessment of variables that affect visual search performance, in the hope that these will eventually lead to a comprehensive cognitive theory of visual search.
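
A glimpse/detection-lobe model reduces search to simple arithmetic: each fixation ("glimpse") inspects roughly one detection-lobe-sized region, and if glimpses land independently, the number needed to find the target is geometrically distributed. A worked sketch (the glimpse duration and lobe size are illustrative assumptions):

```python
def expected_search_time(display_area, lobe_area, p_detect=1.0, glimpse_s=0.3):
    """Random-glimpse search model: per glimpse, the target is detected with
    probability p_detect if it falls inside the detection lobe, so the
    glimpse count to detection is geometric with mean 1/p_per_glimpse."""
    p_per_glimpse = p_detect * (lobe_area / display_area)
    return glimpse_s / p_per_glimpse  # expected search time in seconds

# A lobe covering 5% of the display and 300 ms glimpses: ~6 s on average.
print(expected_search_time(display_area=1.0, lobe_area=0.05))
```

The concepts the authors introduce, such as organization of the field and a variable field of view, are exactly what this memoryless baseline model leaves out.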

