Ergonomics of design and use of visual display terminals (VDTs) in offices. Part 3. Specification for visual displays

1992 ◽  
Vol 23 (3) ◽  
pp. 209


1995 ◽  
Vol 83 (6) ◽  
pp. 1184-1193 ◽  
Author(s):  
Keerti Gurushanthaiah ◽  
Matthew B. Weinger ◽  
Carl E. Englund

Background: Anesthesiologists use data presented on visual displays to monitor patients' physiologic status. Although studies in nonmedical fields have suggested that display format affects performance, few studies have examined the effect of display format on anesthesiologists' monitoring performance.


1986 ◽  
Vol 30 (7) ◽  
pp. 675-678 ◽  
Author(s):  
Robert G. Eggleston ◽  
Richard A. Chechile ◽  
Rebecca N. Fleischman

An approach for measuring the cognitive complexity of visual display formats is presented. The approach involves modeling both the knowledge that can be extracted from a format and the knowledge an operator brings to a task. A semantic network formalism is developed to capture task-relevant knowledge, from which four orthogonal predictor measures of cognitive complexity are derived. In an experiment, seven different avionic missions, performed with the aid of a horizontal situation display, were studied, and three of the predictor measures were found to correlate significantly with observed task difficulty. The results indicate that a semantic network formalism can be used to produce an objective metric of format quality in terms of cognitive complexity.
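The abstract does not spell out how the four predictor measures are computed. Purely as a loose illustration of the general idea, the Python sketch below encodes task-relevant knowledge as a small semantic network of (concept, relation, concept) triples and derives two toy graph measures; the node names, relations, and measures are assumptions made for this example, not the authors' formalism.

```python
# Illustrative sketch only: a toy semantic network and simple graph-based
# complexity counts. Node names, relations, and measures are assumptions
# for demonstration, not the predictor measures from the study.
from collections import defaultdict

# Task-relevant knowledge as (source, relation, target) triples.
triples = [
    ("horizontal_situation_display", "shows", "ownship_position"),
    ("horizontal_situation_display", "shows", "waypoint"),
    ("waypoint", "has", "bearing"),
    ("waypoint", "has", "range"),
    ("ownship_position", "relative_to", "waypoint"),
]

graph = defaultdict(list)
for src, rel, dst in triples:
    graph[src].append((rel, dst))

nodes = {n for s, _, d in triples for n in (s, d)}

def propositional_density(triples, nodes):
    """Relations per concept: one crude proxy for format complexity."""
    return len(triples) / len(nodes)

def inference_depth(graph, start):
    """Longest chain of relations reachable from the display node (BFS levels)."""
    depth, frontier, seen = 0, [start], {start}
    while frontier:
        nxt = []
        for node in frontier:
            for _, dst in graph.get(node, []):
                if dst not in seen:
                    seen.add(dst)
                    nxt.append(dst)
        if nxt:
            depth += 1
        frontier = nxt
    return depth

print("concepts:", len(nodes))
print("relations:", len(triples))
print("propositional density:", propositional_density(triples, nodes))
print("inference depth from display:", inference_depth(graph, "horizontal_situation_display"))
```

In this toy version, a format that packs more relations per concept or forces longer inference chains would score as more complex; the study's actual measures would be defined over its own task analyses.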


Author(s):  
W. Peter Colquhoun

Using a task which closely simulated the actual output from a sonar device, the performance of 12 subjects was observed for a total of 115 hr in repeated prolonged monitoring sessions under auditory, visual, and dual-mode display conditions. Despite an underlying basic superiority of signal discriminability on the visual display, and the occurrence of long-term practice effects, detection rate was consistently and substantially higher under the auditory condition, and higher still with the dual-mode display. These results are similar to those obtained by earlier workers using artificial laboratory tasks for shorter periods, and are consistent with the notion that auditory displays have greater attention-gaining capacity in a “vigilance” situation. A further comparison of the auditory and visual displays was then made in an “alerted” situation, where the possible occurrence of a signal was indicated by a warning stimulus in the alternative sensory mode. Ten subjects were observed for a total of 57 hr in these conditions, under which performance was found to be clearly superior with the visual display. Cross-modal correlations of performance indicated the presence of a common factor of signal detectability within subjects. It was concluded that where efficiency in both the initial detection of targets and in their subsequent identification and tracking is equally important, the best solution would seem to be to retain both auditory and visual displays and to ensure that these are monitored concurrently.
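The abstract quantifies performance in terms of detection rate and signal discriminability without naming a specific index; a common way to separate discriminability from response bias in monitoring data of this kind is the signal-detection measure d′. The sketch below is generic and uses invented counts, not the study's data.

```python
# Generic signal-detection sketch (illustrative numbers, not the study's data):
# d' = z(hit rate) - z(false-alarm rate), computed per display condition.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index, with a simple correction to avoid rates of 0 or 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)        # log-linear correction
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for three display conditions over a monitoring session.
conditions = {
    "auditory":  (41, 9, 6, 144),
    "visual":    (30, 20, 4, 146),
    "dual-mode": (45, 5, 7, 143),
}
for name, counts in conditions.items():
    print(f"{name:>9}: d' = {d_prime(*counts):.2f}")
```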


2017 ◽  
Vol 284 (1864) ◽  
pp. 20171774 ◽  
Author(s):  
Paweł Ręk ◽  
Robert D. Magrath

Many group-living animals cooperatively signal to defend resources, but what stops deceptive signalling to competitors about coalition strength? Cooperative-signalling species include mated pairs of birds that sing duets to defend their territory. Individuals of these species sometimes sing ‘pseudo-duets’ by mimicking their partner's contribution, but it is unknown whether these songs are deceptive, or why duets are normally reliable. We studied pseudo-duets in Australian magpie-larks, Grallina cyanoleuca, and tested whether multimodal signalling constrains deception. Magpie-larks give antiphonal duets coordinated with a visual display, with each sex typically choosing a different song type within the duet. Individuals produced pseudo-duets almost exclusively during nesting, when partners were apart, but the two song types were used in sequence rather than antiphonally. Strikingly, birds hid and gave no visual displays, implying deceptive suppression of information. Acoustic playbacks showed that pseudo-duets provoked the same response from residents as true duets, regardless of whether they were sequential or antiphonal, and a stronger response than true duets consisting of a single song type. By contrast, experiments with robot models showed that songs accompanied by movements of two birds prompted stronger responses than songs accompanied by movements of one bird, irrespective of the number of song types or singers. We conclude that magpie-larks used deceptive pseudo-duets when partners were apart, and suppressed the visual display to maintain the subterfuge. We suggest that the visual component of many species' duets provides the most reliable information about the number of signallers and may have evolved to maintain honesty in duet communication.


Author(s):  
Agustín J Elias-Costa ◽  
Julián Faivovich

Cascades and fast-flowing streams impose severe restrictions on acoustic communication, with loud broadband background noise hampering signal detection and recognition. In this context, diverse behavioural features, such as ultrasound production and visual displays, have arisen in the evolutionary history of torrent-dwelling amphibians. The importance of the vocal sac in multimodal communication is being increasingly recognized, and recently a new vocal sac visual display has been discovered: unilateral inflation of paired vocal sacs. In the diurnal stream-breeding Hylodidae from the Atlantic forest, where it was first described, this behaviour is likely to be enabled by a unique anatomical configuration of the vocal sacs. To assess whether other taxa share this exceptional structure, we surveyed torrent-dwelling species with paired vocal sacs across the anuran tree of life and examined the vocal sac anatomy of exemplar species across 18 families. We found striking anatomical convergence among hylodids and species of the distantly related basal ranid genera Staurois, Huia, Meristogenys and Amolops. Ancestral character state reconstruction identified three new synapomorphies for Ranidae. Furthermore, we surveyed the vocal sac configuration of other anuran species that perform visual displays and report observations of what appears to be unilateral inflation of paired vocal sacs in Staurois guttatus, an extremely rare behaviour in anurans.


1977 ◽  
Vol 45 (3_suppl) ◽  
pp. 1171-1178
Author(s):  
R. B. Lawson ◽  
Cynthia Whitmore ◽  
Dawn Lawrence

Detection thresholds for targets displayed against two- and three-dimensional backgrounds were measured under backward masking and non-masking conditions. The results indicate that planar ring targets displayed against a two-dimensional ground are easier to mask than identical targets portrayed against a three-dimensional background. Also, the detectability of a planar ring target is enhanced when it is included within a three-dimensional rather than an identical but two-dimensional visual display. These results confirm and extend previous findings and suggest a processing asymmetry biased toward three-dimensional visual displays.
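The abstract reports detection thresholds without describing the estimation procedure; a simple adaptive (up-down) staircase is one standard way such thresholds are measured. The sketch below simulates that idea with an invented observer model and parameters, purely to illustrate what threshold measurement involves; it does not reflect the study's apparatus or stimuli.

```python
# Illustrative 1-up/1-down staircase for a detection threshold. The observer
# model, step size, and starting level are invented for this sketch.
import math
import random

def simulated_observer(contrast, true_threshold=0.3, slope=10.0):
    """Toy psychometric function: detection probability rises with contrast."""
    p_detect = 1.0 / (1.0 + math.exp(-slope * (contrast - true_threshold)))
    return random.random() < p_detect

def staircase(start=0.8, step=0.05, n_reversals=8):
    """Lower the level after each detection, raise it after each miss,
    so the level oscillates around the observer's detection threshold."""
    contrast, direction, reversals = start, None, []
    while len(reversals) < n_reversals:
        detected = simulated_observer(contrast)
        new_direction = -1 if detected else +1
        if direction is not None and new_direction != direction:
            reversals.append(contrast)              # a reversal of direction
        direction = new_direction
        contrast = min(1.0, max(0.0, contrast + direction * step))
    late = reversals[-6:]
    return sum(late) / len(late)                    # average of late reversals

print("estimated threshold:", round(staircase(), 3))
```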


1981 ◽  
Vol 25 (1) ◽  
pp. 454-456
Author(s):  
Margaret M. Clarke ◽  
John Garin ◽  
Andrea Preston-Anderson

Optimal closed-circuit television viewing is a critical component in the development of remote teleoperator systems for the performance of complex work in high radiation environments. This paper describes the development of a methodology whereby the visual display components of such a system can be optimized from a human factors viewpoint. The following steps were taken: (1) Identification of generic remote tasks using chronology of typical operations, equipment specifications, and projections of maintenance and repair requirements. (2) Specification of task remote visual cue requirements on which experimental remote visual displays (independent variables) were designed. (3) Measurement of subjects’ task-related characteristics: general aptitude, work history, television habits, vision, and visual motor dexterity. (4) Design and implementation of an experiment to identify optimal remote visual displays as related to remote task performance (dependent variable). (5) Treatment of learning effects. The paper concludes with a discussion of the advantages of this method, the incorporation of subjects and experimental observations into future studies, and suggestions for the applicability of this methodology to other related remote visual display areas.


Author(s):  
Victor S. Finomore ◽  
Christopher K. McClernon ◽  
Jantz V. Johnson ◽  
Jacob K. Snow ◽  
Jessica M. Steuber

Head-mounted displays (HMDs) are being explored as an alternative means of displaying relevant information to dismounted operators. The goal of this project was to examine different visual display concepts and evaluate participants' attention allocation to information presented on their HMD. Additionally, their ability to detect potential threats in the environment was evaluated. This information will help revamp the design of information displays for HMDs. The task in this study required participants to monitor their HMD for critical alerts and respond accordingly while also making shoot/no-shoot decisions about threats in their environment. We hypothesized that presenting information in different layouts on the HMD would reduce participants' ability to detect real-world events. Accuracy of the shoot/no-shoot decisions was collected along with accuracy of detection of information on the HMD. We found that shooting performance did not differ across the three HMD layouts; however, detection of information on the HMD was worst when all information was in the center of the HMD. The data from this study will be used to help develop intelligent visual displays used by Battlefield Airmen to accomplish their mission.


2000 ◽  
Vol 9 (6) ◽  
pp. 557-580 ◽  
Author(s):  
Russell L. Storms ◽  
Michael J. Zyda

The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.
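The experiments manipulated pixel resolution, audio sampling frequency, and Gaussian white-noise level. Purely as an illustration of those kinds of stimulus degradations, a NumPy sketch might look like the following; the array shapes, factors, and noise levels are assumptions for this example, not the study's settings.

```python
# Generic stimulus-degradation sketch with NumPy; parameter values are
# illustrative only and do not reflect the study's conditions.
import numpy as np

def reduce_pixel_resolution(image, factor):
    """Block-average then repeat, so the image keeps its size but loses detail."""
    h, w = image.shape[:2]
    small = image[: h - h % factor, : w - w % factor]
    small = small.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def add_gaussian_noise(signal, sigma):
    """Additive Gaussian white noise, for either images or audio."""
    return signal + np.random.normal(0.0, sigma, size=signal.shape)

def reduce_sampling_frequency(audio, keep_every):
    """Crude decimation followed by sample-and-hold back to the original length."""
    return np.repeat(audio[::keep_every], keep_every)[: len(audio)]

# Toy stimuli.
image = np.random.rand(64, 64, 3)                         # stand-in for a rendered VE frame
audio = np.sin(np.linspace(0, 2 * np.pi * 440, 44_100))   # 1 s of a 440 Hz tone at 44.1 kHz

low_res = add_gaussian_noise(reduce_pixel_resolution(image, 4), sigma=0.05)
low_fs = add_gaussian_noise(reduce_sampling_frequency(audio, 4), sigma=0.01)
```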


Perception ◽  
1994 ◽  
Vol 23 (11) ◽  
pp. 1369-1386 ◽  
Author(s):  
Doug Mahar ◽  
Brian Mackenzie ◽  
Don McNicol

The extent to which auditory, tactile, and visual perceptual representations are similar, particularly when dealing with speech and speech-like stimuli, was investigated. It was found that comparisons between auditory and tactile patterns were easier to perform than were similar comparisons between auditory and visual stimuli. This was true across a variety of styles of tactile and visual display, and was not due to limitations in the discriminability of the visual displays. The findings suggest that auditory and tactile representations of stimuli are more alike than are auditory and visual ones. It was also found that touch and vision differ in terms of the style of information distribution which they process most efficiently. Touch dealt with patterns best when the pattern was characterised by changes across time, whereas vision did best when spatially or spatiotemporally distributed patterns were presented. As the sense of hearing also seems to specialise in the processing of temporally ordered patterns, these results suggest one way in which the senses of hearing and touch differ from vision.

