Auditory Displays: If They Are So Useful, Why Are They Turned Off?

Author(s):  
Robert A. King ◽  
Gregory M. Corso

Pilots often turn off the auditory displays that are provided to improve their performance (Wiener, 1977; Veitengruber, Boucek, & Smith, 1977). The intensity of the auditory display is often cited as a possible cause of this behavior (Cooper, 1977). However, processing the additional information is a concurrent task demand that may increase subjective workload (Wickens & Yeh, 1983; McCloy, Derrick, & Wickens, 1983). Pilots may therefore attempt to reduce subjective workload at the expense of performance by turning off the auditory display. Forty undergraduate males performed a visual search task. Three conditions (auditory display on, auditory display off, and subject's choice) were run in combination with nine levels of visual display load. The auditory display, a 4000 Hz tone presented at a between-subjects intensity of 60, 70, 80, or 90 dB(A), indicated that the target letter was in the lower half of the search area. The NASA-TLX (Task Load Index) was used to measure subjective workload after each block of trials (Hart & Staveland, 1988). A non-monotonic relationship was found between auditory display intensity and auditory display usage. Evidence was found that the auditory display increased some aspects of subjective workload, namely physical demand and frustration. Furthermore, performance and subjective workload dissociated in the manner predicted by Wickens and Yeh (1983). The implications of these results for display design are discussed.

Author(s):  
Keenan R. May ◽  
Briana Sobel ◽  
Jeff Wilson ◽  
Bruce N. Walker

In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to present information supporting object targeting. Unfortunately, spatial audio conveys sound source elevation poorly, and elevation is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room under low- or no-visibility conditions: (1) a one-time height-denoting “area cue,” (2) ongoing “proximity feedback,” or (3) both combined. All three led to improvements in performance and subjective workload compared to no sound, with displays (2) and (3) producing the largest improvements. This pattern was attenuated, but still present, under low visibility compared with no visibility. These results indicate that persons who need to locate nearby objects in limited-visibility conditions could benefit from the types of auditory displays considered here.


2000 ◽  
Vol 9 (6) ◽  
pp. 557-580 ◽  
Author(s):  
Russell L. Storms ◽  
Michael J. Zyda

The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.


2014 ◽  
Vol 19 (1) ◽  
pp. 70-77 ◽  
Author(s):  
Stephen Roddy ◽  
Dermot Furlong

Aesthetics are gaining increasing recognition as an important topic in auditory display. This article looks to embodied cognition to provide an aesthetic framework for auditory display design. It calls for a serious rethinking of the relationship between aesthetics and meaning-making in order to tackle the mapping problem, which has resulted from historically positivistic and disembodied approaches within the field. Arguments for an embodied aesthetic framework are presented. An early example is considered, and suggestions for further research on the road to an embodied aesthetics are proposed. Finally, a closing discussion considers the merits of this approach to solving the mapping problem and designing more intuitively meaningful auditory displays.


1970 ◽  
Vol 30 (1) ◽  
pp. 235-238 ◽  
Author(s):  
Thomas N. Jones ◽  
Roger E. Kirk

This experiment was designed to compare the monitoring performance of Ss using a visual display with the performance of Ss using an auditory display. 24 Ss were randomly assigned to monitor either the visual or the auditory display for a 3-hr. period. Two measures of performance, reaction time and probability of responding, were obtained during the monitoring session. An analysis of the results indicates that Ss who monitored the auditory display had shorter reaction times, higher probability of responding, and less variability than Ss who monitored the visual display.


Author(s):  
Joseph K. Nuamah ◽  
Younho Seong

Psychophysiological measures can be used to determine whether a particular display produces a general difference in brain function. Such information might be valuable in efforts to improve usability in display design. In this preliminary study, we aimed to use the electroencephalography (EEG) task load index (TLI), given by the ratio of mean frontal midline theta energy to mean parietal alpha energy, to provide insight into the mental effort required by participants performing intuition-inducing and analysis-inducing tasks. We employed behavioral measures (reaction time and percent correct), and a subjective measure (NASA-Task Load Index) to validate the objective measure (TLI). The results we obtained were consistent with our hypothesis that mental effort required for analysis-inducing tasks would be different from that required for intuition-inducing tasks. Although our sample size was small, we were able to obtain a significant positive correlation between NASA-Task Load Index and TLI.
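The abstract defines the EEG task load index as the ratio of mean frontal-midline theta power to mean parietal alpha power. As an illustration only (the band edges, sampling rate, and simple periodogram estimator below are assumptions, not details taken from the study), that ratio could be computed along these lines:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` within the [lo, hi) Hz band (periodogram estimate)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def task_load_index(frontal_midline, parietal, fs):
    """EEG TLI: frontal-midline theta (4-8 Hz) over parietal alpha (8-13 Hz)."""
    theta = band_power(frontal_midline, fs, 4.0, 8.0)
    alpha = band_power(parietal, fs, 8.0, 13.0)
    return theta / alpha
```

On this sketch, doubling the frontal theta amplitude quadruples theta power and so raises the index, consistent with higher TLI indicating greater mental effort.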


Author(s):  
W. Peter Colquhoun

Using a task which closely simulated the actual output from a sonar device, the performance of 12 subjects was observed for a total of 115 hr in repeated prolonged monitoring sessions under auditory, visual, and dual-mode display conditions. Despite an underlying basic superiority of signal discriminability on the visual display, and the occurrence of long-term practice effects, detection rate was consistently and substantially higher under the auditory condition, and higher again with the dual-mode display. These results are similar to those obtained by earlier workers using artificial laboratory tasks for shorter periods, and are consistent with the notion that auditory displays have greater attention-gaining capacity in a “vigilance” situation. A further comparison of the auditory and visual displays was then made in an “alerted” situation, where the possible occurrence of a signal was indicated by a warning stimulus in the alternative sensory mode. Ten subjects were observed for a total of 57 hr in these conditions, under which performance was found to be clearly superior with the visual display. Cross-modal correlations of performance indicated the presence of a common factor of signal detectability within subjects. It was concluded that where efficiency both in the initial detection of targets and their subsequent identification and tracking are equally important, the best solution would seem to be to retain both auditory and visual displays and to ensure that these are monitored concurrently.


Author(s):  
James A. Ballas

Bernie Krause has hypothesized that “each creature appears to have its own sonic niche (channel, or space) in the frequency spectrum and/or time slot occupied by no other at that particular moment” (Krause, 1987). The implication of this hypothesis is that good sound design should produce sounds with unique spectral properties for a particular context. The semantics of the context also need to be considered. However, this principle is difficult to satisfy because inexpensive sound-generating devices have very limited (and primitive) audio capability.


2020 ◽  
Vol 19 (6) ◽  
pp. 38-49
Author(s):  
D. M. Kuz’min ◽ 
A. A. Fedotova
The main priority of middle ear surgery is a safe and optimal view of the surgical field, together with the most accurate possible visualization of anatomical structures; this need is a driving factor in the evolution of otosurgery. The additional information provided by three-dimensional (3D) images has been shown to improve understanding of temporal bone anatomy and the operator's ability to assess associated diseases, thereby optimizing surgical management. The experimental work presented here describes a new technique for visualizing the surgical field that improves the quality of the operator's work and expands the possibilities of middle ear surgery. At the Chair of Otorhinolaryngology of the Mechnikov North-Western State Medical University, a remote adapter for an endoscopic tube was created that broadcasts the video image received from the tube's distal end to virtual reality glasses. The concepts of disparity and stereopsis are used to explain how information is conveyed in this three-dimensional presentation. All results were evaluated with the NASA Task Load Index scale. Under three-dimensional visualization of the surgical field, a lower level of subjective workload was found when performing manipulations on the middle ear, which was regarded as a positive effect of stereopsis.


2013 ◽  
Vol 8 (2) ◽  
pp. 73 ◽  
Author(s):  
Alexander Refsum Jensenius ◽  
Rolf Inge Godøy

The paper presents sonomotiongram, a technique for the creation of auditory displays of human body motion based on motiongrams. A motiongram is a visual display of motion, based on frame differencing and reduction of a regular video recording. The resultant motiongram shows the spatial shape of the motion as it unfolds in time, somewhat similar to the way in which spectrograms visualise the shape of (musical) sound. The visual similarity of motiongrams and spectrograms is the conceptual starting point for the sonomotiongram technique, which explores how motiongrams can be turned into sound using “inverse FFT”. The paper presents the idea of shape-sonification, gives an overview of the sonomotiongram technique, and discusses sonification examples of both simple and complex human motion.
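The abstract describes the pipeline only at a high level: frame differencing and reduction yield the motiongram, and an inverse FFT turns it into sound. A rough sketch of that idea follows; the array shapes, frame length, width-averaging reduction, and zero-phase inverse-FFT resynthesis are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def motiongram(frames):
    """Reduce a greyscale video (T x H x W array) to a motiongram:
    absolute frame differences averaged over image width, giving an
    H x (T-1) image whose columns show where motion occurs over time."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # motion image per frame pair
    return diffs.mean(axis=2).T              # collapse width; rows = height

def sonify(mg, frame_len=512):
    """Treat each motiongram column as a magnitude spectrum and resynthesise
    it with a zero-phase inverse real FFT, concatenating the audio frames."""
    n_bins = frame_len // 2 + 1
    out = []
    for col in mg.T:  # one column per frame pair
        spec = np.interp(np.linspace(0, 1, n_bins),
                         np.linspace(0, 1, len(col)), col)
        out.append(np.fft.irfft(spec, n=frame_len))
    return np.concatenate(out)
```

In this reading, vertical position in the image maps to frequency, so motion higher in the frame produces energy higher in the spectrum, echoing the motiongram/spectrogram analogy the abstract draws.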

