Evaluation of Auditory, Visual, and Dual-Mode Displays for Prolonged Sonar Monitoring in Repeated Sessions

Author(s):  
W. Peter Colquhoun

Using a task which closely simulated the actual output from a sonar device, the performance of 12 subjects was observed for a total of 115 hr in repeated prolonged monitoring sessions under auditory, visual, and dual-mode display conditions. Despite an underlying superiority of signal discriminability on the visual display, and the occurrence of long-term practice effects, detection rate was consistently and substantially higher under the auditory condition, and higher still with the dual-mode display. These results are similar to those obtained by earlier workers using artificial laboratory tasks for shorter periods, and are consistent with the notion that auditory displays have greater attention-gaining capacity in a “vigilance” situation. A further comparison of the auditory and visual displays was then made in an “alerted” situation, where the possible occurrence of a signal was indicated by a warning stimulus in the alternative sensory mode. Ten subjects were observed for a total of 57 hr in these conditions, under which performance was found to be clearly superior with the visual display. Cross-modal correlations of performance indicated the presence of a common factor of signal detectability within subjects. It was concluded that where efficiency in both the initial detection of targets and their subsequent identification and tracking is equally important, the best solution would seem to be to retain both auditory and visual displays and to ensure that these are monitored concurrently.

2000, Vol 9 (6), pp. 557-580
Author(s):  
Russell L. Storms ◽  
Michael J. Zyda

The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.
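The abstract does not say how the auditory-quality manipulations were implemented (the original experiments ran in HTML/Java/JavaScript). A minimal NumPy sketch of the two auditory manipulations it names, reduced sampling frequency and added Gaussian white noise, might look like the following; `degrade_audio`, `target_fs`, and `noise_rms` are hypothetical names, and the parameter values are illustrative only.

```python
# Illustrative sketch only: the paper's exact parameters and implementation
# are not given; this approximates the two manipulations named in the abstract.
import numpy as np

def degrade_audio(signal, fs, target_fs, noise_rms):
    """Simulate a lower sampling frequency, then add Gaussian white noise."""
    # Naive sample-and-hold downsampling (a real implementation would
    # low-pass filter first to avoid aliasing).
    step = max(1, int(round(fs / target_fs)))
    low_fi = np.repeat(signal[::step], step)[: len(signal)]
    noise = np.random.normal(0.0, noise_rms, size=low_fi.shape)
    return low_fi + noise

# Example: a 1 kHz tone at 44.1 kHz, degraded to ~11 kHz with mild noise.
fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
degraded = degrade_audio(tone, fs=fs, target_fs=11025, noise_rms=0.05)
```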


Author(s):  
Jason Sterkenburg ◽  
Steven Landry ◽  
Myounghoon Jeon

With the proliferation of technologies operated via in-air hand movements, e.g., virtual/augmented reality, in-vehicle infotainment systems, and large public information displays, there remains an open question about whether and how auditory displays can be used effectively to facilitate eyes-free aimed movements. We conducted a within-subjects study, similar to a Fitts paradigm study, in which 24 participants completed simple aimed movements to acquire targets of varying sizes and distances. Participants completed these aimed movements under six conditions, each presenting a unique combination of visual and auditory displays. Results showed participants were generally faster to make selections when using visual displays than when using displays without visuals. However, selection accuracy with auditory-only displays was similar to that with displays including visual components. These results highlight the potential for auditory displays to aid aimed movements using air gestures in conditions where visual displays are impractical, impossible, or unhelpful.
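As background (standard Fitts' law, not taken from this study): in a Fitts paradigm, target distance and width are combined into an index of difficulty, and movement time is modeled as a linear function of it:

```latex
% Fitts' law, Shannon formulation: D = distance to target, W = target width,
% a and b are empirically fitted constants.
\[
  \mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right), \qquad
  \mathrm{MT} = a + b \cdot \mathrm{ID}
\]
```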


1995, Vol 83 (6), pp. 1184-1193
Author(s):  
Keerti Gurushanthaiah ◽  
Matthew B. Weinger ◽  
Carl E. Englund

Background: Anesthesiologists use data presented on visual displays to monitor patients' physiologic status. Although studies in nonmedical fields have suggested differential effects on performance among display formats, few studies have examined the effect of display format on anesthesiologist monitoring performance.


1986, Vol 30 (7), pp. 675-678
Author(s):  
Robert G. Eggleston ◽  
Richard A. Chechile ◽  
Rebecca N. Fleischman

An approach for measuring the cognitive complexity of visual display formats is presented. The approach involves modeling both the knowledge that can be extracted from a format and the knowledge an operator brings to a task. A semantic network formalism is developed to capture task-relevant knowledge, from which four orthogonal predictor measures of cognitive complexity are derived. In an experiment, seven different avionic missions, performed with the aid of a horizontal situation display, were studied, and three of the predictor measures were found to correlate significantly with observed task difficulty. The results indicate that a semantic network formalism can be used to produce an objective metric of format quality in terms of cognitive complexity.
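The abstract does not define the four predictor measures, so the sketch below is purely hypothetical: it illustrates the general idea of encoding task-relevant knowledge as a labeled semantic network and deriving structural complexity metrics from it, with stand-in measures (node count, relation count, average out-degree) in place of the paper's actual four.

```python
# Hypothetical illustration of complexity measures over a semantic network;
# the paper's four orthogonal predictors are not specified in the abstract.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node), ...]

    def relate(self, subj, relation, obj):
        self.edges[subj].append((relation, obj))

    def complexity_measures(self):
        nodes = set(self.edges) | {o for es in self.edges.values() for _, o in es}
        n_edges = sum(len(es) for es in self.edges.values())
        avg_out = n_edges / max(1, len(self.edges))
        return {"nodes": len(nodes), "relations": n_edges, "avg_out_degree": avg_out}

# Toy fragment of horizontal-situation-display knowledge (invented example).
net = SemanticNetwork()
net.relate("aircraft_symbol", "indicates", "ownship_position")
net.relate("course_line", "indicates", "desired_track")
net.relate("ownship_position", "compared_with", "desired_track")
print(net.complexity_measures())
```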


2020, Vol 4 (Supplement_1), p. 798
Author(s):  
Mamoun Mardini ◽  
Todd Manini ◽  
Jennifer Schrack

Continuous, long-term monitoring with remote capabilities using wearable technology is ideal for capturing information about patient/participant symptoms synced to sensor-based information. The Real-time Online Assessment and Mobility Monitor (ROAMM) is a smartwatch framework configured to collect data in free-living settings from both sensors (location and movement) and responses to symptom notifications through a visual display. The symposium presents the overall framework and preliminary findings from a demonstration study in older adults with knee osteoarthritis. Karnati will present the general framework of ROAMM, explaining the data flow from the smartwatch to end users (clinicians and researchers). He will highlight components of the design that make the framework unique and highly flexible in serving different studies with different research questions. Rouzaud evaluated satisfaction, usability, and compliance in wearing a smartwatch and using the ROAMM app. Participants complied with ecological prompts about pain, fatigue, and mood three times a day (82.5% compliance rate). Additionally, >70% reported being satisfied with the function/usability of ROAMM and the comfort of wearing the smartwatch. Mardini examined the temporal relationship between ecological pain and life-space mobility features derived from Global Positioning System coordinates. Results suggested that higher levels of knee pain in older adults were associated with lower life-space mobility. Manini examined physician perceptions of an electronic health record (EHR) graphical interface presenting top-ranked patient attributes of pain, falls, hydration, and mobility patterns. Results indicated a relatively high level of usability of the EHR interface depicting smartwatch data.


2013, Vol 8 (2), p. 73
Author(s):  
Alexander Refsum Jensenius ◽  
Rolf Inge Godøy

<p class="author">The paper presents sonomotiongram, a technique for the creation of auditory displays of human body motion based on motiongrams. A motiongram is a visual display of motion, based on frame differencing and reduction of a regular video recording. The resultant motiongram shows the spatial shape of the motion as it unfolds in time, somewhat similar to the way in which spectrograms visualise the shape of (musical) sound. The visual similarity of motiongrams and spectrograms is the conceptual starting point for the sonomotiongram technique, which explores how motiongrams can be turned into sound using &ldquo;inverse FFT&rdquo;. The paper presents the idea of shape-sonification, gives an overview of the sonomotiongram technique, and discusses sonification examples of both simple and complex human motion.</p>


1979, Vol 23 (1), pp. 362-366
Author(s):  
Wanda J. Smith

The design of workstations with visual displays has become the subject of considerable interest and concern during the past few years. One area of concern relates to the assumption that long-term viewing of such displays at close focal distances may contribute to visual fatigue. A second is the effect on the human visual system of the frequent changes in surface illumination associated with display units used in combination with hard-copy documents. These and other concerns have prompted articles in the popular press, which in turn have drawn the attention of various scientific organizations to these effects. This paper reviews some of the literature on a limited aspect of this issue, namely the accommodation and pupillary systems as they relate to long-term viewing of visual display units.


2021, Vol 35 (4)
Author(s):  
Ryan M. Green ◽  
Myia L. Graves ◽  
Carrie M. Edwards ◽  
Edward P. Hebert ◽  
Daniel B. Hollander

Physical activity enhances physical health, reduces disease, and protects against metabolic syndrome and obesity, while sitting for extended periods of time has a negative effect on long-term health outcomes. Thus, reducing sitting time has been identified as a health-enhancing goal. The purpose of this study was to explore the perceptions and responses of college students to sitting versus standing in class. Five standing desks were placed in a classroom of traditional sitting desks. In a counterbalanced, within-subjects design, 88 undergraduate students (age M=21.64, SD=6.55 years) participated in the study. Some participating students first stood at a desk for three consecutive class meetings and then sat for three classes, while others sat for three consecutive classes and then stood for three. Surveys were administered at the beginning and end of each class and at the end of six consecutive class sessions. Results indicated that mood was significantly higher on standing than on sitting days, that the majority of participants had a favorable perception of the standing-in-class experience, and that they would use standing stations if the option were available. This study is one of few to examine the viability of, and response to, adding standing desks in college classrooms, and it indicates that standing desks may be perceived favorably and could be utilized to reduce sitting time.


Author(s):  
Myounghoon Jeon

While design theories in visual displays have been well developed and further refined, relatively little research has been conducted on design theories and models in auditory displays. The existing discussions mainly account for functional mappings between sounds and referents, but these do not fully address the design aspects of auditory displays. To bridge this gap, the present proposal focuses on design affordances in sound design, among the many design constructs available. To this end, the definition and components of design affordances are briefly explored, followed by auditory display examples of those components, to gauge whether sound can deliver perceived affordances in interactive products. Finally, other design constructs, such as feedback and signifiers, are discussed together with future work. This exploratory proposal is expected to contribute to elaborating sound design theory and practice.

