Visual Search in Simple Simulations of Realistic Scenes

1993 ◽  
Vol 77 (3) ◽  
pp. 867-881 ◽  
Author(s):  
Theo Boersema ◽  
Harm J. G. Zwaga ◽  
Kees Jorens

The effect distracting objects have on visual-search performance in real-life situations cannot readily be predicted from current search theories. The validity of an approach to close this gap was tested by comparing search performance for color slides of scenes in public buildings with performance for simplified computer-generated images derived from these slides. The target was always a blue rectangle in both the original slides of scenes and the computer simulations. The distractors were differently colored rectangles (not blue), and their number was varied systematically. Analysis showed a significant linear increase in search time with number of distractors, which challenges predictions of current search theories. An explanation for this contradiction is proposed. Also, search times for color slides were significantly longer than those for computer images; however, there was no significant interaction between type of stimulus and number of distractors. It is concluded that the simulated scenes yielded adequate predictions of the effect of distracting objects on search performance in real-life situations.
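The linear increase in search time with the number of distractors reported above can be illustrated with an ordinary least-squares fit; a minimal sketch, with invented data values used purely for illustration:

```python
# Hypothetical illustration: fit mean search time (s) as a linear function
# of distractor count. All data values are invented, not from the study.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Number of distractors vs. mean search time (s) -- made-up values.
distractors = [5, 10, 15, 20]
search_time = [1.2, 1.6, 2.0, 2.4]

slope, intercept = linear_fit(distractors, search_time)
print(f"slope = {slope:.3f} s per distractor, intercept = {intercept:.2f} s")
```

The slope of such a fit is the conventional measure of search efficiency: a flat slope suggests parallel processing, while a positive slope of this kind is what the authors argue challenges current search theories.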

2007 ◽  
Vol 17 (04) ◽  
pp. 275-288 ◽  
Author(s):  
Antonio J. Rodriguez-Sanchez ◽  
Evgueni Simine ◽  
John K. Tsotsos

Selective Tuning (ST) presents a framework for modeling attention, and in this work we show how it performs in covert visual search tasks by comparing its performance to human performance. Two implementations of ST have been developed. The Object Recognition Model recognizes and attends to simple objects formed by the conjunction of various features, and the Motion Model recognizes and attends to motion patterns. The validity of the Object Recognition Model was first tested by successfully duplicating the results of Nagy and Sanchez. A second experiment evaluated the model's performance against the observed continuum of search slopes for feature-conjunction searches of varying difficulty. The Motion Model was tested against two experiments dealing with searches in the visual motion domain. A simple odd-man-out search for counter-clockwise rotating octagons among identical clockwise rotating octagons produced a linear increase in search time with increasing set size. The second experiment was similar to one described by Thornton and Gilden. The results from both implementations agreed with the psychophysical data from the simulated experiments. We conclude that ST provides a valid explanatory mechanism for human covert visual search performance, an explanation going far beyond the conventional saliency-map-based explanations.


Author(s):  
Kaifeng Liu ◽  
Calvin Ka-lun Or

This is an eye-tracking study examining the effects of image segmentation and target number on visual search performance. A two-way repeated-measures computer-based visual search test was used for data collection. Thirty students participated in the test, in which they were asked to search for all of the Landolt Cs in 80 arrays of closed rings. The dependent variables were search time, accuracy, fixation count, and average fixation duration. Our principal findings were that some of the segmentation methods significantly improved accuracy and reduced search time, fixation count, and average fixation duration compared with the no-segmentation condition. Increased target number was associated with longer search time, lower accuracy, more fixations, and longer average fixation duration. Our study indicates that although visual search tasks with multiple targets are relatively difficult, search accuracy and efficiency can potentially be improved with the aid of image segmentation.
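The dependent measures described here (fixation count and average fixation duration) are simple aggregates over an eye-tracker's per-trial fixation list; a minimal sketch, where the event format and values are assumptions for illustration:

```python
# Hypothetical sketch: compute fixation count and average fixation
# duration (ms) for one trial from a list of fixation events.
# The (start_ms, end_ms) event format and the values are invented.

def fixation_metrics(fixations):
    """fixations: list of (start_ms, end_ms) tuples for one trial."""
    durations = [end - start for start, end in fixations]
    count = len(durations)
    avg_duration = sum(durations) / count if count else 0.0
    return count, avg_duration

trial = [(0, 220), (250, 430), (470, 720)]  # three fixations
count, avg_ms = fixation_metrics(trial)
print(count, avg_ms)
```

In practice these aggregates would be computed per participant and condition before entering the repeated-measures analysis.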


Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can cause visual system overload and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked auditory condition the slower of the two. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.


2020 ◽  
Author(s):  
Nir Shalev ◽  
Sage Boettcher ◽  
Hannah Wilkinson ◽  
Gaia Scerif ◽  
Anna C. Nobre

It is believed that children have difficulties in guiding attention while facing distraction. However, developmental accounts of spatial attention rely on traditional search designs using static displays. In real life, dynamic environments can embed regularities that afford anticipation and benefit performance. We developed a dynamic visual-search task to test the ability of children to benefit from spatio-temporal regularities to detect goal-relevant targets appearing within an extended dynamic context amidst irrelevant distracting stimuli. We compared children and adults in detecting predictable vs. unpredictable targets fading in and out among competing distracting stimuli. While overall search performance was poorer in children, both groups detected more predictable targets. This effect was confined to task-relevant information. Additionally, we report how predictions are related to individual differences in attention. Altogether, our results indicate a striking capacity of prediction-led guidance towards task-relevant information in dynamic environments, refining traditional views about poor goal-driven attention in childhood.


Author(s):  
Gary Perlman ◽  
J. Edward Swan

An experiment is reported in which the relative effectiveness of color coding, texture coding, and no coding of target borders for speeding visual search was determined. The following independent variables were crossed in a within-subjects factorial design: Color coding (present or not), Texture coding (present or not), Distance between similarly coded targets (near or far), Group size of similarly coded targets (1, 2, 3, or 4), and a Replication factor of target Border width (10, 20, or 30 pixels). Search times, errors, and subjective rankings of the coding methods were recorded. Results showed that color coding improved search time compared to no coding, but that texture coding was not effectively used by subjects, resulting in search times nearly identical to those for uncoded targets. Subjective preference rankings reflected the time data. The adequate power of the experiment, along with the results of preparatory pilot studies, leads us to the conclusion that texture coding is not an effective coding method for improving visual search time.
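A fully crossed within-subjects design like the one above can be enumerated directly from its factor levels; a minimal sketch using the stated levels (the labels are assumptions for illustration):

```python
# Enumerate the crossed conditions of the within-subjects design:
# Color coding x Texture coding x Distance x Group size x Border width.
from itertools import product

color = ["color", "no-color"]          # color coding present or not
texture = ["texture", "no-texture"]    # texture coding present or not
distance = ["near", "far"]             # distance between similar targets
group_size = [1, 2, 3, 4]              # group size of similar targets
border_px = [10, 20, 30]               # border width replication factor

conditions = list(product(color, texture, distance, group_size, border_px))
print(len(conditions))  # 2 * 2 * 2 * 4 * 3 = 96 conditions
```

Enumerating the cross in code makes the size of the design explicit, which is useful when budgeting trials per participant in a within-subjects study.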


1988 ◽  
Vol 32 (19) ◽  
pp. 1386-1390
Author(s):  
Jennie J. Decker ◽  
Craig J. Dye ◽  
Ko Kurokawa ◽  
Charles J. C. Lloyd

This study was conducted to investigate the effects of display failures and rotation of dot-matrix symbols on visual search performance. The type of display failure (cell, horizontal line, vertical line), failure mode (ON, failures matched the symbols; OFF, failures matched the background), percentage of failures (0, 1, 2, 3, 4%), and rotation angle (0, 70, 105 degrees) were the variables examined. Results showed that displays exhibiting ON cell failures greater than 1% significantly degraded search time. Cell failures degraded performance more than line failures. Search time and accuracy were best when symbols were oriented upright. The effects of display failures and rotation angle were found to be independent. Implications for display design and suggestions for quantifying the distortion due to rotation are discussed.


2021 ◽  
Vol 11 (3) ◽  
pp. 283
Author(s):  
Olga Lukashova-Sanz ◽  
Siegfried Wahl

Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images was displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, where the task was to find the target as fast as possible. The mean search time and the proportion of trials where participants failed to find the target were compared across conditions. Furthermore, the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that it is possible to improve performance through a subtle, saliency-aware scene modulation in a challenging realistic visual search scenario. The current work provides insight into potential visual augmentation designs aiming to improve users' performance in everyday visual search tasks.


1974 ◽  
Vol 18 (2) ◽  
pp. 158-170 ◽  
Author(s):  
John R. Bloomfield ◽  
Harvey H. Marmurek ◽  
Bruce G. Traub

This study provides a direct investigation of embedded-target visual search situations. The relationships between measures of visual search performance, peripheral visual acuity, and ratings of discriminability were determined. The embedded-target displays were constructed using a color and a monochrome texture background. They were used in a rating study, in which the production magnitude rating method was used, and in visual search and peripheral acuity experiments. In the first of these, 28 observers rated the discriminability of five color targets from the color background and of four black-and-white targets from the monochrome background. There were two visual search experiments: five observers searched the color background for the color targets, and six searched the monochrome background for the black-and-white targets. In both experiments, after practice, there were sixty search trials per observer per target. The extent into the periphery at which the five color targets could be seen when the color display was exposed for 0.3 s was measured for the five observers used in the color search task. The same measurement was made with the black-and-white stimulus materials for four of the six observers used with the monochrome task. For the color stimulus materials, a set of simple relationships was found to describe the measures obtained in all three experimental areas. The results with the monochrome texture material did not fit the same equations as well. The equations were based on those developed by Howarth and Bloomfield (1969; Bloomfield and Howarth, 1969) for search situations involving targets that were confused with other nontarget objects. For the color display, simple equations related mean search time, peripheral visual acuity (θ), and rated discriminability (D). This is an encouraging finding. It leads one to hope that predictive procedures developed from these simple relationships can be applied in a wide range of complex search situations.


Author(s):  
Timothy H. Monk ◽  
Brian Brown

In an earlier study, Brown and Monk (1975) defined the area of the display immediately adjacent to the target as the "target surround". Using highly specific configurations of nontargets in the target surround, they showed that congested target surrounds act to camouflage the target. The present study tests these results under more general conditions where no specific configurations are enforced. A linearly increasing function is found between geometric mean search time and target surround density, using three measures of the latter. The implication of this result for studies of overall nontarget density is discussed.
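The geometric mean search time used here downweights occasional very long trials relative to the arithmetic mean, which is why it is a common summary statistic for skewed search-time distributions; a minimal sketch, with invented times:

```python
# Hypothetical sketch: geometric mean of per-trial search times (s).
# The geometric mean is exp(mean(log t)) and is less sensitive to
# long-tail trials than the arithmetic mean. The values are invented.
import math

def geometric_mean(times):
    return math.exp(sum(math.log(t) for t in times) / len(times))

times = [1.0, 2.0, 4.0]
print(geometric_mean(times))  # cube root of 8 = 2.0
```

For comparison, the arithmetic mean of the same three times is about 2.33 s, pulled upward by the longest trial.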

