Localization Performance in a Binaural Real-Time Auralization System Extended to Research Hearing Aids

2020 ◽  
Vol 24 ◽  
pp. 233121652090870 ◽  
Author(s):  
Florian Pausch ◽  
Janina Fels

Auralization systems for auditory research should ideally be validated by perceptual experiments, as well as objective measures. This study employed perceptual tests to evaluate a recently proposed binaural real-time auralization system for hearing aid (HA) users. The dynamic localization of real sound sources was compared with that of virtualized ones, reproduced binaurally over headphones, loudspeakers with crosstalk cancellation (CTC) filters, research HAs, or combined via loudspeakers with CTC filters and research HAs under free-field conditions. System-inherent properties affecting localization cues were identified and their effects on overall horizontal localization, reversal rates, and angular error metrics were assessed. The general localization performance in combined reproduction was found to fall between what was measured for loudspeakers with CTC filters and research HAs alone. Reproduction via research HAs alone resulted in the highest reversal rates and angular errors. While combined reproduction helped decrease the reversal rates, no significant effect was observed on the angular error metrics. However, combined reproduction resulted in the same overall horizontal source localization performance as measured for real sound sources, while improving localization compared with reproduction over research HAs alone. Collectively, the results with respect to combined reproduction can be considered a performance indicator for future experiments involving HA users.

2019 ◽  
Vol 23 ◽  
pp. 233121651984733 ◽  
Author(s):  
Sebastian A. Ausili ◽  
Bradford Backus ◽  
Martijn J. H. Agterberg ◽  
A. John van Opstal ◽  
Marc M. van Wanrooij

Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
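The vocoder study above rests on the idea that a CI vocoder discards a band's temporal fine structure while keeping its slow amplitude envelope. Below is a toy, single-channel noise-vocoder sketch, not the study's real-time CI vocoder; the sample rate, envelope cutoff, and one-pole smoother are illustrative assumptions.

```python
import math
import random

# Toy sketch of one channel of a noise vocoder. The band's fine structure
# is discarded: the signal is rectified, its envelope is smoothed with a
# one-pole low-pass filter, and the envelope re-modulates a noise carrier.
FS = 16_000      # sample rate in Hz (assumption)
CUTOFF = 50.0    # envelope smoothing cutoff in Hz (assumption)

def envelope(signal, fs=FS, cutoff=CUTOFF):
    """Rectify the signal and smooth it with a one-pole low-pass filter."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    env, state = [], 0.0
    for x in signal:
        state += alpha * (abs(x) - state)
        env.append(state)
    return env

def vocode_channel(signal, seed=0):
    """Replace the signal's carrier with noise modulated by its envelope."""
    rng = random.Random(seed)
    return [e * rng.uniform(-1.0, 1.0) for e in envelope(signal)]

# A 300 Hz tone: the output keeps its slow amplitude envelope
# (which settles near 2/pi for a rectified unit sine) but none of
# the fine structure a listener would use for temporal pitch cues.
tone = [math.sin(2 * math.pi * 300 * n / FS) for n in range(FS // 10)]
out = vocode_channel(tone)
```

A full vocoder would run one such channel per analysis band and sum the outputs; the point here is only that spectral and temporal fine-structure cues never survive the envelope stage.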


Author(s):  
K. A. McMullen ◽  
Gregory H. Wakefield

Although static localization performance in auditory displays is known to substantially improve as a listener spends more time in the environment, the impact of real-time interactive movement on these tasks is not yet well understood. Accordingly, a training procedure was developed and evaluated to address this question. In a set of experiments, listeners searched for and marked the locations of five virtually spatialized sound sources. The task was performed with and without training. Finally, the listeners performed a second search and mark task to assess the impacts of training. The results indicate that the training procedure maintained or significantly improved localization accuracy. In addition, localization performance did not improve for listeners who did not complete the training procedure.


Author(s):  
Na Zhu ◽  
Sean Wu

This paper presents a methodology for tracking and tracing multiple incoherent sound sources in 3D space in real time. A salient feature of this methodology is its ability to handle all types of sound signals, including broadband, narrowband, continuous, impulsive, and tonal (sinusoidal) sounds over the audible frequency range (20 to 20,000 Hz). Source locations are indicated in Cartesian coordinates in real time, and the target sources are viewed through an automatic tracking camera covering a 350° field of view. The hardware comprises four microphones, a thermometer, a webcam, a five-channel signal conditioner, and a laptop, so the system can be made light, portable, inexpensive, and easy to set up and use. The underlying algorithm is a hybrid approach that combines a model of sound radiation from a point source in a free field, triangulation, and signal processing techniques. To better understand the performance of the device, numerical simulations are conducted to study the impact of signal-to-noise ratio, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of the results. Experiments validate the results over a wide variety of real-world sound signals such as helicopter noise, human conversation, truck pass-by noise, gunshots, impact sounds, clapping, and coughing. Satisfactory results are obtained in most cases, even when a source is behind the measurement microphones.
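The triangulation idea in the abstract above can be illustrated with a minimal sketch: locating a source from time-differences of arrival (TDOA) at a small microphone array. The microphone positions, 2D search grid, and brute-force solver are assumptions for illustration, not the paper's algorithm.

```python
import math

C = 343.0  # speed of sound in m/s at ~20 °C; the thermometer in the
           # paper's hardware list lets the real system correct for temperature

# Four microphones on a small square (metres) - an illustrative geometry
MICS = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (0.2, 0.2)]

def tdoas(src):
    """TDOA of each microphone relative to mic 0 for a source at `src`."""
    d0 = math.dist(src, MICS[0])
    return [(math.dist(src, m) - d0) / C for m in MICS[1:]]

def locate(measured, span=1.0, step=0.01):
    """Grid search for the point whose predicted TDOAs best match
    the measured ones in the least-squares sense."""
    best, best_err = None, float("inf")
    x = 0.0
    while x <= span:
        y = 0.0
        while y <= span:
            err = sum((p - q) ** 2 for p, q in zip(tdoas((x, y)), measured))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

true_src = (0.70, 0.40)
est = locate(tdoas(true_src))  # recovers the grid point nearest the source
```

A production system would solve the hyperbolic TDOA equations directly rather than search a grid, and would estimate the TDOAs themselves by cross-correlating microphone signals; the sketch only shows why four microphones suffice to pin down a source position.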


1988 ◽  
Vol 31 (2) ◽  
pp. 156-165 ◽  
Author(s):  
P. A. Busby ◽  
Y. C. Tong ◽  
G. M. Clark

The identification of consonants in /a/-C-/a/ nonsense syllables, using a fourteen-alternative forced-choice procedure, was examined in 4 profoundly hearing-impaired children under five conditions: audition alone using hearing aids in free field (A), vision alone (V), auditory-visual using hearing aids in free field (AV1), auditory-visual with linear amplification (AV2), and auditory-visual with syllabic compression (AV3). In the AV2 and AV3 conditions, acoustic signals were binaurally presented by magnetic or acoustic coupling to the subjects' hearing aids. The syllabic compressor had a compression ratio of 10:1 and attack and release times of 1.2 ms and 60 ms. The confusion matrices were subjected to two analysis methods: hierarchical clustering and information transmission analysis using articulatory features. The same general conclusions were drawn from either analysis method. The results indicated better performance in the V condition than in the A condition. In the three AV conditions, the subjects predominantly combined the acoustic parameter of voicing with the visual signal. No consistent differences were recorded across the three AV conditions; syllabic compression did not, therefore, appear to have a significant influence on AV perception for these children. A high degree of subject variability was recorded for the A and the three AV conditions, but not for the V condition.


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones played in random order by eight sound sources in the horizontal plane. Subjects either could or could not use information supplied by their pinnae (external ears) and by head movements. We found that both pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; the absence of either factor produced the same loss of localization accuracy and much the same error pattern. Head-movement analysis showed that subjects turned their faces toward the emitting sound source, except for sources located exactly in front or exactly behind, which were identified by turning the head to both sides. Head-movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


2020 ◽  
Vol 8 (1) ◽  
pp. 91
Author(s):  
Imam Teguh Islamy ◽  
Hanim Maria Astuti ◽  
Radityo Prasetianto Wibowo

In carrying out its function of appraising employee performance at ITS, the Directorate of Human Resources and Organization (DSDMO) of ITS still uses Likert-scale ratings to assess achievement against the task descriptions held by computer administrators (pranata komputer). This raises a problem in determining employee performance appraisals, in this case for pranata komputer, as the ratings remain highly subjective, which can affect the performance scores these staff receive. To reduce this subjectivity, a performance measurement based on Key Performance Indicators is needed so that the performance of pranata komputer can be measured objectively. In addition, an integrated system for reporting their performance is needed so that it can be monitored in real time and the level of performance achievement can be tracked. Keywords: Pranata komputer, Performance, Key Performance Indicator, Performance Reporting System, Dashboard.


2021 ◽  
pp. 147592172199621
Author(s):  
Enrico Tubaldi ◽  
Ekin Ozer ◽  
John Douglas ◽  
Pierre Gehl

This study proposes a probabilistic framework for near real-time seismic damage assessment that exploits heterogeneous sources of information about the seismic input and the structural response to the earthquake. A Bayesian network is built to describe the relationship between the various random variables that play a role in the seismic damage assessment, ranging from those describing the seismic source (magnitude and location) to those describing the structural performance (drifts and accelerations) as well as relevant damage and loss measures. The a priori estimate of the damage, based on information about the seismic source, is updated by performing Bayesian inference using the information from multiple data sources such as free-field seismic stations, global positioning system receivers and structure-mounted accelerometers. A bridge model is considered to illustrate the application of the framework, and the uncertainty reduction stemming from sensor data is demonstrated by comparing prior and posterior statistical distributions. Two measures are used to quantify the added value of information from the observations, based on the concepts of pre-posterior variance and relative entropy reduction. The results shed light on the effectiveness of the various sources of information for the evaluation of the response, damage and losses of the considered bridge and on the benefit of data fusion from all considered sources.
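The core of the framework above is Bayesian updating of an a priori damage estimate with sensor observations, with relative entropy quantifying the value of the data. A minimal discrete sketch follows; the damage states, prior, and likelihoods are illustrative assumptions, not the paper's Bayesian network.

```python
import math

# Discrete damage states with an a priori distribution inferred from the
# seismic source (magnitude and location) - illustrative values only
states = ["none", "slight", "moderate", "extensive"]
prior = [0.50, 0.30, 0.15, 0.05]

# Assumed likelihood of one observation (e.g., a drift threshold exceeded
# at a structure-mounted accelerometer) given each damage state
likelihood = [0.05, 0.20, 0.45, 0.70]

# Bayes' rule: posterior ∝ prior × likelihood
evidence = sum(p * l for p, l in zip(prior, likelihood))
posterior = [p * l / evidence for p, l in zip(prior, likelihood)]

# Relative entropy (KL divergence) of posterior from prior quantifies the
# information gained from the sensor, in the spirit of the paper's
# second added-value measure
kl = sum(q * math.log(q / p) for q, p in zip(posterior, prior) if q > 0)
```

In the actual framework this update propagates through a Bayesian network fusing several heterogeneous sources (free-field stations, GPS receivers, accelerometers), but each fusion step reduces to the same prior-times-likelihood computation.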


2020 ◽  
Vol 10 (24) ◽  
pp. 9154
Author(s):  
Paula Morella ◽  
María Pilar Lambán ◽  
Jesús Royo ◽  
Juan Carlos Sánchez ◽  
Jaime Latapia

The purpose of this work is to develop a new Key Performance Indicator (KPI) that quantifies the cost of the Six Big Losses defined by Nakajima and to implement it in a Cyber-Physical System (CPS), achieving real-time monitoring of the KPI. The paper follows the methodology described below. A cost model, together with the Six Big Losses description, was used to develop the indicator. At the same time, the machine tool was integrated into a CPS using Industry 4.0 technologies, enhancing real-time data acquisition. Once the KPI was defined, we developed software (in Python) that turns these real-time data into relevant information by calculating the indicator. Finally, we carried out a case study presenting the results of our new KPI and comparing them with other indicators related to the Six Big Losses but in different dimensions. As a result, our research quantifies the Six Big Losses economically, improves the detection of the largest losses, and highlights the importance of attending to several dimensions at once, mainly the productive, the sustainable, and the economic.
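The shape of such a cost-based KPI can be sketched as follows. The loss categories come from Nakajima's Six Big Losses; the loss times, the single hourly cost rate, and the function name are illustrative assumptions, not the study's cost model, which distinguishes more cost dimensions.

```python
# Nakajima's Six Big Losses (the basis of the OEE framework)
SIX_BIG_LOSSES = [
    "breakdowns",          # availability losses
    "setup_adjustment",
    "idling_minor_stops",  # performance losses
    "reduced_speed",
    "startup_rejects",     # quality losses
    "production_rejects",
]

def loss_cost(loss_hours, hourly_rate_eur):
    """Monetary cost of each loss category and their total, in EUR.

    `loss_hours` maps each loss to the machine time it consumed; a CPS
    would feed these values in from real-time machine data.
    """
    costs = {k: loss_hours[k] * hourly_rate_eur for k in SIX_BIG_LOSSES}
    costs["total"] = sum(costs[k] for k in SIX_BIG_LOSSES)
    return costs

# One shift's (hypothetical) loss times in hours
shift = {"breakdowns": 1.5, "setup_adjustment": 0.5,
         "idling_minor_stops": 0.25, "reduced_speed": 0.75,
         "startup_rejects": 0.1, "production_rejects": 0.4}
kpi = loss_cost(shift, hourly_rate_eur=120.0)
# kpi["total"] == 3.5 h * 120 EUR/h = 420.0 EUR
```

Expressing each loss in euros rather than in time or percentage points is what lets the indicator rank the "bigger" losses directly, as the abstract describes.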


2013 ◽  
Author(s):  
Alan W. Boyd ◽  
William M. Whitmer ◽  
W. Owen Brimijoin ◽  
Michael A. Akeroyd

2021 ◽  
Vol 2 ◽  
Author(s):  
Thirsa Huisman ◽  
Axel Ahrens ◽  
Ewen MacDonald

To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers, while virtual reality (VR) glasses, i.e., a head-mounted display (HMD), present the visual information. Both technologies have been shown to be suitable for research. However, their combination might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using 1st-, 3rd-, 5th-, or 11th-order Ambisonics, with and without visual information. Results showed that the localization error is larger with 1st-order Ambisonics than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by about 2° on average, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this HMD-induced shift but could not fully compensate for it. While localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when Ambisonics sound reproduction is used, but visual information can compensate for most of the effects. As such, most use cases for VR will be unaffected by these shifts in the perceived location of the auditory stimuli.
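The Ambisonics reproduction the study varies by order can be illustrated with a minimal, horizontal-only (2D) sketch: encode a plane wave into circular harmonics, decode to a loudspeaker ring with a basic sampling decoder, and predict the perceived direction with the velocity (Makita) vector. The 16-speaker ring, source angle, and decoder choice are assumptions for the example, not the study's setup.

```python
import math

def encode(theta, order):
    """Circular-harmonic coefficients of a plane wave from angle `theta`."""
    coeffs = [1.0]
    for m in range(1, order + 1):
        coeffs += [math.cos(m * theta), math.sin(m * theta)]
    return coeffs

def decode(coeffs, speaker_angles, order):
    """Sampling decoder: evaluate the harmonic series at each speaker."""
    gains = []
    for phi in speaker_angles:
        g = coeffs[0]
        for m in range(1, order + 1):
            g += 2.0 * (coeffs[2 * m - 1] * math.cos(m * phi)
                        + coeffs[2 * m] * math.sin(m * phi))
        gains.append(g)
    return gains

def velocity_vector_angle(gains, speaker_angles):
    """Direction of the velocity vector, a simple localization predictor."""
    x = sum(g * math.cos(a) for g, a in zip(gains, speaker_angles))
    y = sum(g * math.sin(a) for g, a in zip(gains, speaker_angles))
    return math.atan2(y, x)

speakers = [2 * math.pi * k / 16 for k in range(16)]  # uniform ring
src = math.radians(20.0)
est1 = velocity_vector_angle(decode(encode(src, 1), speakers, 1), speakers)
```

On an ideal uniform ring the velocity vector matches the source direction at every order; what grows with order is how narrowly the gains concentrate around the source, which is one way to see why 1st-order reproduction blurs localization more than 3rd order and above.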

