Spatial Auditory Cueing for a Dynamic Three-Dimensional Virtual Reality Visual Search Task

Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can overload the visual system and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional, virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked condition yielding slower searches than the non-masked condition. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.
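The abstract reports only summary statistics; the Python sketch below illustrates one way such a three-condition, within-subjects comparison could be run. The data values, the choice of a Friedman omnibus test, and the paired follow-up tests are assumptions for illustration, not the study's actual analysis.

```python
# A minimal sketch (not the authors' analysis code) of how per-participant
# mean search times across the three cue conditions might be compared.
# The array values below are placeholders, not data from the study.
import numpy as np
from scipy import stats

# rows = participants (n = 7); columns = conditions
# [no cue, non-masked spatialized cue, masked spatialized cue]
search_times = np.array([
    [3.1, 2.2, 2.6],
    [2.8, 2.0, 2.4],
    [3.4, 2.5, 2.9],
    [3.0, 2.1, 2.5],
    [3.3, 2.4, 2.8],
    [2.9, 2.0, 2.3],
    [3.2, 2.3, 2.7],
])

# Omnibus repeated-measures test (nonparametric, given the small n)
stat, p = stats.friedmanchisquare(*search_times.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Pairwise follow-ups: each cue condition against no cue
for i, label in [(1, "non-masked"), (2, "masked")]:
    t, p = stats.ttest_rel(search_times[:, 0], search_times[:, i])
    print(f"no cue vs {label}: t = {t:.2f}, p = {p:.4f}")
```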

Author(s):  
Kaifeng Liu ◽  
Calvin Ka-lun Or

This eye-tracking study examined the effects of image segmentation and target number on visual search performance. A two-way repeated-measures, computer-based visual search test was used for data collection. Thirty students participated in the test, in which they were asked to search for all of the Landolt Cs in 80 arrays of closed rings. The dependent variables were search time, accuracy, fixation count, and average fixation duration. Our principal findings were that some of the segmentation methods significantly improved accuracy and reduced search time, fixation count, and average fixation duration compared with the no-segmentation condition. An increased target number was associated with longer search time, lower accuracy, more fixations, and longer average fixation duration. Our study indicates that although visual search tasks with multiple targets are relatively difficult, search accuracy and efficiency can potentially be improved with the aid of image segmentation.
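As a concrete illustration of the two-way repeated-measures design described above, the sketch below fits such a model with statsmodels' AnovaRM. The factor levels, column names, and generated data are hypothetical placeholders, not the study's materials.

```python
# A hypothetical sketch of a two-way repeated-measures ANOVA of the kind
# described in the abstract. Factor levels and the data-generation step
# are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(30):                            # 30 participants
    for segmentation in ["none", "grid", "color"]:   # assumed levels
        for n_targets in [1, 3, 5]:                  # assumed levels
            rows.append({
                "subject": subject,
                "segmentation": segmentation,
                "n_targets": n_targets,
                # placeholder search time in seconds
                "search_time": 10 + 2 * n_targets
                               - (0 if segmentation == "none" else 2)
                               + rng.normal(0, 1),
            })
df = pd.DataFrame(rows)

# One observation per subject per cell, as AnovaRM requires
model = AnovaRM(df, depvar="search_time", subject="subject",
                within=["segmentation", "n_targets"])
print(model.fit())
```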


1993 ◽  
Vol 2 (1) ◽  
pp. 44-53 ◽  
Author(s):  
Kristinn R. Thorisson

The most common visual feedback technique in teleoperation is in the form of monoscopic video displays. As robotic autonomy increases and the human operator takes on the role of a supervisor, three-dimensional information is effectively presented by multiple, televised, two-dimensional (2-D) projections showing the same scene from different angles. To analyze how people go about using such segmented information for estimations about three-dimensional (3-D) space, 18 subjects were asked to determine the position of a stationary pointer in space; eye movements and reaction times (RTs) were recorded during a period when either two or three 2-D views were presented simultaneously, each showing the same scene from a different angle. The results revealed that subjects estimated 3-D space by using a simple algorithm of feature search. Eye movement analysis supported the conclusion that people can efficiently use multiple 2-D projections to make estimations about 3-D space without reconstructing the scene mentally in three dimensions. The major limiting factor on RT in such situations is the subjects' visual search performance, giving in this experiment a mean of 2270 msec (SD = 468; N = 18). This conclusion was supported by predictions of the Model Human Processor (Card, Moran, & Newell, 1983), which predicted a mean RT of 1820 msec given the general eye movement patterns observed. Single-subject analysis of the experimental data suggested further that in some cases people may base their judgments on a more elaborate 3-D mental model reconstructed from the available 2-D views. In such situations, RTs and visual search patterns closely resemble those found in the mental rotation paradigm (Just & Carpenter, 1976), giving RTs in the range of 5-10 sec.
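To make the Model Human Processor prediction concrete, the sketch below computes a back-of-the-envelope RT estimate from a fixation count using the nominal processor cycle times from Card, Moran, and Newell (1983). The task decomposition and fixation counts are illustrative assumptions, not the paper's actual calculation.

```python
# A back-of-the-envelope sketch of a Model Human Processor style RT estimate.
# The cycle times are the nominal "middle man" values from Card, Moran, &
# Newell (1983); the fixation counts and the per-fixation decomposition are
# illustrative assumptions, not the paper's model.
PERCEPTUAL_CYCLE = 100  # ms, one perceptual processor cycle
COGNITIVE_CYCLE = 70    # ms, one cognitive processor cycle
MOTOR_CYCLE = 70        # ms, one motor processor cycle
SACCADE_TIME = 230      # ms, one eye movement (saccade plus settling)

def mhp_rt_estimate(n_fixations: int) -> int:
    """Estimate RT as: for each fixation, move the eyes, perceive the view,
    and decide whether it resolves the pointer position; then respond."""
    per_fixation = SACCADE_TIME + PERCEPTUAL_CYCLE + COGNITIVE_CYCLE
    response = COGNITIVE_CYCLE + MOTOR_CYCLE
    return n_fixations * per_fixation + response

# With roughly 4-5 fixations, the estimate brackets the 1820 msec
# prediction cited in the abstract.
for n in (4, 5):
    print(n, "fixations ->", mhp_rt_estimate(n), "ms")
```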


Author(s):  
John P. McIntire ◽  
Paul R. Havig ◽  
Scott N. J. Watamaniuk ◽  
Robert H. Gilkey

Author(s):  
Karl F. Van Orden ◽  
Joseph DiVita

Previous research has demonstrated that search times are reduced when flicker is used to highlight color-coded symbols, but that flicker is not distracting when subjects must search for non-highlighted symbols. This prompted an examination of flicker and other stimulus dimensions in a conjunctive search paradigm. In all experiments, at least 15 subjects completed a minimum of 330 trials in which they indicated the presence or absence of target stimuli on a CRT display that contained 8, 16, or 32 items. In Experiment 1, subjects searched for blue-steady or red-flickering (5.6 Hz) circular targets among blue-flickering and red-steady distractors. Blue-steady targets produced a more efficient search rate (11.6 msec/item) than red-flickering targets (19.3 msec/item). In Experiment 2, a conjunction of flicker and size (large and small filled circles) yielded the opposite result; search performance for large-flickering targets was unequivocally parallel. In Experiment 3, conjunctions of form and flicker yielded highly serial search performance. The findings are consistent with the response properties of the parvocellular and magnocellular channels of the early visual system, and suggest that search is most efficient when one of these channels can be filtered completely.
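Search rates such as the 11.6 and 19.3 msec/item figures above are conventionally obtained by regressing mean RT on display set size; the sketch below shows that computation. The RT values are placeholders chosen to be consistent with an ~11.6 msec/item slope, not data from the experiment.

```python
# A minimal sketch of how a search rate (msec/item) is derived: regress
# mean RT on display set size. The RTs are placeholders, not study data.
import numpy as np

set_sizes = np.array([8, 16, 32])     # items on the CRT display
mean_rts = np.array([620, 713, 898])  # placeholder mean RTs in msec

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search rate = {slope:.1f} msec/item, intercept = {intercept:.0f} msec")
```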


Author(s):  
Gary Perlman ◽  
J. Edward Swan

An experiment is reported that determined the relative effectiveness of color coding, texture coding, and no coding of target borders for speeding visual search. The following independent variables were crossed in a within-subjects factorial design: color coding (present or not), texture coding (present or not), distance between similarly coded targets (near or far), group size of similarly coded targets (1, 2, 3, or 4), and a replication factor of target border width (10, 20, or 30 pixels). Search times, errors, and subjective rankings of the coding methods were recorded. Results showed that color coding improved search time compared to no coding, but that texture coding was not effectively used by subjects, yielding search times nearly identical to those for uncoded targets. Subjective preference rankings reflected the time data. The adequate power of the experiment, along with the results of preparatory pilot studies, leads us to conclude that texture coding is not an effective coding method for improving visual search time.
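As an illustration of how a fully crossed within-subjects design like this one can be enumerated, the sketch below builds the trial list with itertools.product. The randomization scheme and fixed seed are assumptions for illustration.

```python
# An illustrative enumeration of the crossed within-subjects design
# described above. Randomization with a fixed seed is an assumption.
import itertools
import random

color_coding = [True, False]
texture_coding = [True, False]
distance = ["near", "far"]
group_size = [1, 2, 3, 4]
border_width = [10, 20, 30]  # pixels; replication factor

trials = list(itertools.product(color_coding, texture_coding,
                                distance, group_size, border_width))
random.seed(42)
random.shuffle(trials)

print(len(trials), "cells per replication")  # 2 * 2 * 2 * 4 * 3 = 96
print(trials[0])
```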


2018 ◽  
Vol 18 (10) ◽  
pp. 657
Author(s):  
Hugo Chow-Wing-Bom ◽  
Tessa Dekker ◽  
Pete Jones

1988 ◽  
Vol 32 (19) ◽  
pp. 1386-1390
Author(s):  
Jennie J. Decker ◽  
Craig J. Dye ◽  
Ko Kurokawa ◽  
Charles J. C. Lloyd

This study was conducted to investigate the effects of display failures and rotation of dot-matrix symbols on visual search performance. The variables examined were the type of display failure (cell, horizontal line, vertical line), failure mode (ON, failures matched the symbols; OFF, failures matched the background), percentage of failures (0, 1, 2, 3, or 4%), and rotation angle (0, 70, or 105 degrees). Results showed that ON cell failure rates greater than 1% significantly degrade search time performance. Cell failures degrade performance more than line failures. Search time and accuracy were best when symbols were oriented upright. The effects of display failures and rotation angle were found to be independent. Implications for display design and suggestions for quantifying the distortion due to rotation are discussed.
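A toy version of the failure manipulation described above might look like the sketch below, which flips a given percentage of cells in a binary dot-matrix symbol to the symbol value (ON failures) or the background value (OFF failures). The 5x7 matrix and uniform sampling of failed cells are illustrative assumptions.

```python
# A toy sketch of the display-failure manipulation: flip a percentage of
# cells in a binary dot-matrix symbol. The 5x7 "T" and uniform sampling
# of failed cells are illustrative assumptions, not the study's stimuli.
import numpy as np

def inject_cell_failures(symbol: np.ndarray, pct: float, mode: str,
                         rng: np.random.Generator) -> np.ndarray:
    """Return a copy of a binary dot-matrix symbol with pct% failed cells."""
    out = symbol.copy()
    n_fail = round(out.size * pct / 100)
    idx = rng.choice(out.size, size=n_fail, replace=False)
    # ON failures match the symbol (1); OFF failures match the background (0)
    out.flat[idx] = 1 if mode == "ON" else 0
    return out

rng = np.random.default_rng(0)
letter_t = np.zeros((7, 5), dtype=int)  # crude 5x7 "T"
letter_t[0, :] = 1
letter_t[:, 2] = 1

print(inject_cell_failures(letter_t, pct=4, mode="ON", rng=rng))
```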


2020 ◽  
Vol 10 (7) ◽  
pp. 446
Author(s):  
Nico Marek ◽  
Stefan Pollmann

In visual search, participants can incidentally learn spatial target-distractor configurations, leading to shorter search times for repeated compared with novel configurations. This contextual cueing effect is usually tested within the limited visual field provided by a computer monitor. Here we present, for the first time, an implementation of a classic contextual cueing task (search for a T-shape among L-shapes) in a three-dimensional virtual reality (VR) environment, which enabled us to test whether the incidental learning of repeated search configurations, manifested by shorter search times, would hold in three dimensions. One specific question enabled by combining VR and contextual cueing was whether cueing would hold for targets outside the initial field of view (FOV), which require head movements to be found. In keeping with two-dimensional search studies, reduced search times were observed after the first epoch and remained stable over the remainder of the experiment. Importantly, comparable search time reductions were observed for targets both within and outside of the initial FOV. The results show that a repeated distractors-only configuration in the initial FOV can guide search toward target locations that require a head movement to be seen.
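The trial structure of a contextual cueing experiment of this kind can be sketched as a small generator that interleaves a fixed set of repeated layouts with freshly generated novel ones. The grid size, item count, azimuth range (standing in for the 3-D search space), and interleaving scheme below are assumptions, not the study's implementation.

```python
# A minimal sketch of contextual cueing trial structure: reuse a fixed set
# of "repeated" target-distractor layouts, interleaved with "novel" ones.
# Grid size, item counts, and azimuths are illustrative assumptions.
import random

def make_layout(rng: random.Random, n_items: int = 12):
    """One layout: a target ('T') plus distractors ('L') at grid positions,
    each with an azimuth so targets can fall outside the initial FOV."""
    positions = rng.sample([(x, y) for x in range(10) for y in range(10)],
                           n_items)
    azimuths = [rng.uniform(-180, 180) for _ in positions]
    return [("T" if i == 0 else "L", pos, az)
            for i, (pos, az) in enumerate(zip(positions, azimuths))]

rng = random.Random(7)
repeated = [make_layout(rng) for _ in range(8)]   # reused every epoch

def next_trial(epoch_trial: int):
    # alternate repeated and novel layouts within an epoch (assumed scheme)
    if epoch_trial % 2 == 0:
        return ("repeated", repeated[epoch_trial // 2 % len(repeated)])
    return ("novel", make_layout(rng))

print(next_trial(0)[0], next_trial(1)[0])
```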


2019 ◽  
Author(s):  
Yunhui Zhou ◽  
Yuguo Yu

Abstract
Humans perform sequences of eye movements to search for a target in complex environments, but the efficiency of the human search strategy is still controversial. Previous studies showed that humans can optimally integrate information across fixations and determine the next fixation location. However, those models ignored the temporal control of eye movements and the limited capacity of human memory, and their predictions did not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and recorded the eye movements of human subjects performing a visual search task. We then built a continuous-time eye movement model that incorporates saccadic inaccuracy, saccadic bias, and memory constraints in the visual system. This model agreed with many spatial and temporal properties of human eye movements, and showed similar statistical dependencies between successive eye movements. In addition, our model predicted that the human saccade decision is shaped by a memory capacity of around 8 recent fixations. These results suggest that the human visual search strategy is not strictly optimal in the sense of fully utilizing the visibility map, but instead balances search performance against the costs of performing the task.

Author Summary
How humans determine when and where to make eye movements during visual search is an important unsolved issue. Previous studies suggested that humans can use the visibility map optimally to determine fixation locations, but we found that such models do not agree with the details of human eye movement metrics because they ignore several realistic biological limitations of the human brain and cannot explain the temporal control of eye movements. Instead, we show that considering the temporal course of visual processing and several constraints of the visual system greatly improves predictions of the spatiotemporal properties of human eye movements, while only slightly affecting search performance in terms of median fixation numbers. Therefore, humans may not use the visibility map in a strictly optimal sense, but instead balance search performance against the costs of performing the task.
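The memory-limited saccade decision the authors argue for can be caricatured as selecting the next fixation from a visibility map while inhibiting only the k most recently fixated locations; the sketch below does exactly that with k = 8. The random visibility map and greedy-with-noise selection rule are illustrative assumptions, not the authors' model.

```python
# A toy sketch of a memory-limited saccade decision: pick the next fixation
# from a visibility map while inhibiting only the 8 most recent fixations.
# The random map and greedy-with-noise rule are illustrative assumptions.
from collections import deque
import numpy as np

rng = np.random.default_rng(1)
GRID = 20
visibility = rng.random((GRID, GRID))   # stand-in for a visibility map

def next_fixation(memory: deque) -> tuple:
    scores = visibility + rng.normal(0, 0.05, visibility.shape)  # decision noise
    for (i, j) in memory:               # inhibit only remembered locations
        scores[i, j] = -np.inf
    return tuple(int(v) for v in np.unravel_index(np.argmax(scores),
                                                  scores.shape))

memory = deque(maxlen=8)                # ~8-fixation memory capacity
for _ in range(12):
    fix = next_fixation(memory)
    memory.append(fix)                  # oldest fixation falls out of memory
    print(fix)
```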

