Effects of Display Failures and Symbol Rotation on Visual Search Using Dot-Matrix Symbols

1988 ◽  
Vol 32 (19) ◽  
pp. 1386-1390
Author(s):  
Jennie J. Decker ◽  
Craig J. Dye ◽  
Ko Kurokawa ◽  
Charles J. C. Lloyd

This study investigated the effects of display failures and rotation of dot-matrix symbols on visual search performance. The variables examined were the type of display failure (cell, horizontal line, vertical line), failure mode (ON, failures matched the symbols; OFF, failures matched the background), percentage of failures (0, 1, 2, 3, or 4%), and rotation angle (0, 70, or 105 degrees). Results showed that displays exhibiting ON cell failures greater than 1% significantly degrade search time performance. Cell failures degrade performance more than line failures. Search time and accuracy were best when symbols were oriented upright. The effects of display failures and rotation angle were found to be independent. Implications for display design and suggestions for quantifying the distortion due to rotation are discussed.
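For readers who want to see the structure of the design, the sketch below (not the authors' code; variable names and the assumption of a naive full crossing are mine) enumerates the crossing of the four factors listed above.

```python
# Illustrative sketch only: enumerate the naive full crossing of the four
# factors named in the abstract. The paper's actual trial structure may differ;
# for example, at the 0% failure level the failure type and mode are irrelevant.
from itertools import product

failure_types = ["cell", "horizontal line", "vertical line"]
failure_modes = ["ON", "OFF"]          # ON: failures match symbols; OFF: match background
failure_pcts = [0, 1, 2, 3, 4]         # percentage of failed display elements
rotations_deg = [0, 70, 105]           # symbol rotation angle

conditions = list(product(failure_types, failure_modes, failure_pcts, rotations_deg))
print(len(conditions))                 # 3 * 2 * 5 * 3 = 90 cells in the naive crossing
```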

Author(s):  
Kaifeng Liu ◽  
Calvin Ka-lun Or

This eye-tracking study examined the effects of image segmentation and target number on visual search performance. A two-way repeated-measures, computer-based visual search test was used for data collection. Thirty students participated in the test, in which they were asked to search for all of the Landolt Cs in 80 arrays of closed rings. The dependent variables were search time, accuracy, fixation count, and average fixation duration. Our principal findings were that some of the segmentation methods significantly improved accuracy and reduced search time, fixation count, and average fixation duration compared with the no-segmentation condition. An increased target number was associated with longer search time, lower accuracy, more fixations, and longer average fixation duration. Our study indicates that although visual search tasks with multiple targets are relatively difficult, visual search accuracy and efficiency can potentially be improved with the aid of image segmentation.
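As a rough illustration of how the two fixation-based dependent variables could be derived from exported eye-tracking data, here is a minimal Python sketch; the event structure and field names are assumptions, not the authors' processing pipeline.

```python
# Hypothetical sketch: compute fixation count and average fixation duration
# for one trial from a list of fixation events (field names are assumed).
from statistics import mean

def fixation_metrics(fixations):
    """fixations: list of dicts with 'onset_ms' and 'offset_ms' for one trial."""
    durations = [f["offset_ms"] - f["onset_ms"] for f in fixations]
    count = len(durations)
    avg_duration = mean(durations) if durations else 0.0
    return count, avg_duration

# Example trial with three fixations of 210 ms each:
trial = [{"onset_ms": 0, "offset_ms": 210},
         {"onset_ms": 250, "offset_ms": 460},
         {"onset_ms": 500, "offset_ms": 710}]
print(fixation_metrics(trial))  # (3, 210.0)
```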


Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can cause visual system overload and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time relative to the no-cue condition when either auditory cue type was presented, with the masked auditory condition being the slower of the two. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.


Author(s):  
Gary Perlman ◽  
J. Edward Swan

An experiment is reported in which the relative effectiveness of color coding, texture coding, and no coding of target borders in speeding visual search was determined. The following independent variables were crossed in a within-subjects factorial design: Color coding (present or not), Texture coding (present or not), Distance between similarly coded targets (near or far), Group size of similarly coded targets (1, 2, 3, or 4), and a Replication factor of target Border width (10, 20, or 30 pixels). Search times, errors, and subjective rankings of the coding methods were recorded. Results showed that color coding improved search time compared with no coding, but texture coding was not effectively used by subjects, resulting in search times nearly identical to those for uncoded targets. Subjective preference rankings reflected the time data. The adequate power of the experiment, along with the results of preparatory pilot studies, leads us to conclude that texture coding is not an effective coding method for improving visual search time.


2021 ◽  
Vol 11 (3) ◽  
pp. 283
Author(s):  
Olga Lukashova-Sanz ◽  
Siegfried Wahl

Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle, saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images was displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, where the task was to find the target as fast as possible. The mean search time and the proportion of trials in which participants failed to find the target were compared across conditions. Furthermore, the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that it is possible to improve performance through subtle, saliency-aware scene modulation in a challenging, realistic visual search scenario. The current work provides insight into potential visual augmentation designs aimed at improving users' performance in everyday visual search tasks.
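A minimal sketch of the kind of linear mixed-model analysis described above, assuming a trial-level data table with hypothetical column names (this is not the authors' code): blur condition as a fixed effect and participant as a random intercept, using statsmodels.

```python
# Sketch under assumed column names: participant, blur_condition, search_time_s.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("search_trials.csv")   # hypothetical file: one row per trial

# Fixed effect of blur condition; random intercept per participant.
model = smf.mixedlm("search_time_s ~ C(blur_condition)", data=df,
                    groups=df["participant"])
result = model.fit()
print(result.summary())
```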


1974 ◽  
Vol 18 (2) ◽  
pp. 158-170 ◽  
Author(s):  
John R. Bloomfield ◽  
Harvey H. Marmurek ◽  
Bruce G. Traub

This study provides a direct investigation of embedded-target visual search situations. The relationships between measures of visual search performance, peripheral visual acuity, and ratings of discriminability were determined. The embedded-target displays were constructed using a color and a monochrome texture background. They were used in a rating study, in which the production magnitude rating method was used, and in visual search and peripheral acuity experiments. In the first of these, 28 observers rated the discriminability of five color targets from the color background, and of four black-and-white targets from the monochrome background. There were two visual search experiments. Five observers searched the color background for the color targets, and six searched the monochrome background for the black-and-white targets. In both experiments, after practice, there were sixty search trials per observer per target. The extent into the periphery at which the five color targets could be seen, when the color display was exposed for 0.3 seconds, was measured for the five observers used in the color search task. The same measurement was made with the black-and-white stimulus materials for four of the six observers used with the monochrome task. For the color stimulus materials, a set of simple relationships was found to describe the measures obtained in all three experimental areas. The results with the monochrome texture material did not fit the same equations as well. The equations were based on those developed by Howarth and Bloomfield (1969; Bloomfield and Howarth, 1969) for search situations involving targets that were confused with other nontarget objects. For the color display, the relationships between mean search time, peripheral visual acuity (θ), and rated discriminability (D) could be summarized by a small set of simple equations. This is an encouraging finding. It leads one to hope that predictive procedures developed from these simple relationships can be applied in a wide range of complex search situations.


Author(s):  
Timothy H. Monk ◽  
Brian Brown

In an earlier study, Brown and Monk (1975) defined the area of the display immediately adjacent to the target as the "target surround". Using highly specific configurations of nontargets in the target surround, they showed that congested target surrounds act to camouflage the target. The present study tests these results under more general conditions in which no specific configurations are enforced. A linearly increasing function is found between geometric mean search time and target surround density, using three measures of the latter. The implication of this result for studies of overall nontarget density is discussed.
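As a small illustration of the analysis the abstract describes, the sketch below (with invented numbers, not the study's data) computes geometric mean search times per density level and fits a straight line against target surround density.

```python
# Illustrative sketch only: geometric mean of search times is the exponential
# of the mean of log times; a line is then fitted against surround density.
import numpy as np

# Hypothetical search times (s), grouped by target surround density level.
times_by_density = {1: [2.1, 1.8, 2.6], 2: [3.0, 2.7, 3.5], 3: [4.1, 3.6, 4.8]}

densities = np.array(sorted(times_by_density))
geo_means = np.array([np.exp(np.mean(np.log(times_by_density[d]))) for d in densities])

slope, intercept = np.polyfit(densities, geo_means, 1)   # linear increasing function
print(geo_means, slope, intercept)
```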


Author(s):  
Karl F. van Orden ◽  
Joseph Divita ◽  
Matthew J. Shim

Three visual search experiments evaluated the benefits and distracting effects of using luminance and flashing to highlight subclasses of symbols coded by shape and color. Each of three general shape/color classes (circular/blue, diamond/red, square/yellow) was divided into three subclasses by presenting the upper half, lower half, or entire symbol. Increasing the luminance of a subclass by a factor of two did not result in a significant improvement in search performance. Flashing a subclass at a rate of 3 Hz resulted in a significantly shorter mean search time (a 48% improvement). Increasing the luminance of one subclass (by a factor of five) while simultaneously flashing another significantly improved search times by 31% and 43%, respectively, compared with nonhighlighted search conditions. In each experiment, the search times for nonhighlighted target subclasses were not affected by the presence of brighter and flashing targets. The failure of the initial experiment to find a significant performance improvement from increasing symbol luminance suggested that a larger luminance increase was necessary for this code to be effective. The overall results suggest that using luminance and flashing to highlight subclasses of color- and shape-coded symbols can reduce search times for those subclasses without producing a distraction effect by way of a concomitant increase in the search times for unhighlighted symbols.


Author(s):  
Gary Perlman ◽  
J. Edward Swan

Previously, it had been found that texture coding was ineffective at reducing search time (Perlman & Swan, 1993). In the experiment reported here, 16 subjects searched for blank-, color-, texture-, and density-coded targets of varying complexity in a naturalistic task. The data showed that all non-blank coding methods were significantly, and about equally, more effective at reducing search time than blank-coding (no coding). The difference in outcome from the previous results is explained by task simplification and by the control of possible confounding factors. This difference suggests that coding techniques using texture, and possibly other methods, should be evaluated in context. The similar performance of color-, texture-, and density-coding is explained by the use of equal-saturation and equal-brightness colors. Recommendations for the design of effective coding methods and for future research are discussed.
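One common way to build the kind of equal-saturation, equal-brightness color codes mentioned above is to hold saturation and value fixed in HSV space and vary only hue; the sketch below illustrates that idea under assumed hue steps, saturation, and value, and is not the authors' stimulus-generation code.

```python
# Sketch: n colors that differ only in hue, with saturation and value held
# constant (the specific parameter values here are arbitrary assumptions).
import colorsys

def equal_sv_palette(n, saturation=0.6, value=0.8):
    """Return n RGB triples (0-1 floats) that differ only in hue."""
    return [colorsys.hsv_to_rgb(i / n, saturation, value) for i in range(n)]

for rgb in equal_sv_palette(4):
    print(tuple(round(channel, 3) for channel in rgb))
```

Note that equal HSV value does not guarantee equal perceived brightness on a given display; truly matching brightness across hues typically also requires photometric calibration.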


2007 ◽  
Vol 17 (04) ◽  
pp. 275-288 ◽  
Author(s):  
ANTONIO J. RODRIGUEZ-SANCHEZ ◽  
EVGUENI SIMINE ◽  
JOHN K. TSOTSOS

Selective Tuning (ST) presents a framework for modeling attention, and in this work we show how it performs in covert visual search tasks by comparing its performance to human performance. Two implementations of ST have been developed. The Object Recognition Model recognizes and attends to simple objects formed by the conjunction of various features, and the Motion Model recognizes and attends to motion patterns. The validity of the Object Recognition Model was first tested by successfully duplicating the results of Nagy and Sanchez. A second experiment evaluated the model's performance against the observed continuum of search slopes for feature-conjunction searches of varying difficulty. The Motion Model was tested against two experiments dealing with searches in the visual motion domain. A simple odd-man-out search for counter-clockwise rotating octagons among identical clockwise rotating octagons produced a linear increase in search time with increasing set size. The second experiment was similar to one described by Thornton and Gilden. The results from both implementations agreed with the psychophysical data from the simulated experiments. We conclude that ST provides a valid explanatory mechanism for human covert visual search performance, an explanation going far beyond the conventional saliency-map-based explanations.
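For context, a search slope (the extra search time per added display item) is typically estimated by a linear fit of response time against set size; the sketch below uses invented numbers purely to illustrate the computation, not data from the simulations above.

```python
# Illustrative only: estimate a search slope (ms per item) by linear regression
# of mean search time on set size.
import numpy as np

set_sizes = np.array([4, 8, 12, 16])            # number of items in the display
mean_rt_ms = np.array([620, 780, 935, 1100])    # hypothetical mean search times

slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rt_ms, 1)
print(f"search slope ~ {slope_ms_per_item:.1f} ms/item, intercept ~ {intercept_ms:.0f} ms")
```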


Author(s):  
D. C. Donderi ◽  
Sharon McFadden

Search times and errors were recorded for targets (a triangle or a trapezoid) in marine radar, chart, and radar-chart overlay bitmap computer displays. Lossless JPEG and ZIP compressed file lengths were obtained for each display. The two types of file length were correlated with each other, and they predicted both the maximum time to search each display and the number of errors made per search. Compressed file length is analogous to algorithmic complexity, a theoretical measure of bit-string complexity. It predicts both subjective complexity judgments (previous research) and search performance (this study) for a set of static marine electronic displays. The data suggest that compressed file length will predict minimum anticipated performance in a range of applied visual search tasks.
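A minimal sketch of the general technique this abstract relies on: using lossless compressed length as a practical stand-in for algorithmic complexity and correlating it with a performance measure. The file names and search-time values below are placeholders, and zlib/DEFLATE stands in for the study's lossless JPEG and ZIP compressors.

```python
# Placeholder data and file names; zlib is used here only as an example of a
# lossless compressor whose output length serves as a complexity estimate.
import zlib
import numpy as np

def compressed_length(path):
    """Length in bytes of the zlib-compressed file contents."""
    with open(path, "rb") as f:
        return len(zlib.compress(f.read(), 9))

displays = ["display_01.bmp", "display_02.bmp", "display_03.bmp"]   # hypothetical bitmaps
max_search_time_s = np.array([4.2, 7.9, 11.3])                      # hypothetical measures

lengths = np.array([compressed_length(p) for p in displays], dtype=float)
r = np.corrcoef(lengths, max_search_time_s)[0, 1]
print(f"Pearson r between compressed length and max search time: {r:.2f}")
```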

