interaural level differences
Recently Published Documents


TOTAL DOCUMENTS: 99 (FIVE YEARS: 25)

H-INDEX: 20 (FIVE YEARS: 2)

Author(s): Sławomir K. Zieliński, Paweł Antoniuk, Hyunkook Lee, Dale Johnson

Abstract: One of the greatest challenges in the development of binaural machine audition systems is the disambiguation between front and back audio sources, particularly in complex spatial audio scenes. The goal of this work was to develop a method for discriminating between front- and back-located ensembles in binaural recordings of music. To this end, 22,496 binaural excerpts, representing either front- or back-located ensembles, were synthesized by convolving multi-track music recordings with 74 sets of head-related transfer functions (HRTFs). The discrimination method was developed using both the traditional approach, involving hand-engineered features, and a deep learning technique based on a convolutional neural network (CNN). Under HRTF-dependent test conditions, the CNN showed very high discrimination accuracy (99.4%), slightly outperforming the traditional method. However, under the HRTF-independent test scenario, the CNN performed worse than the traditional algorithm, highlighting the importance of testing algorithms under HRTF-independent conditions and indicating that the traditional method may generalize better than the CNN. A minimum of 20 HRTFs is required to achieve satisfactory generalization performance for the traditional algorithm, and 30 HRTFs for the CNN. The minimum duration of audio excerpts required by both the traditional and CNN-based methods was assessed as 3 s. Feature importance analysis, based on a gradient attribution mapping technique, revealed that for both the traditional and the deep learning methods, the frequency band between 5 and 6 kHz is particularly important for discriminating between front and back ensemble locations. Linear-frequency cepstral coefficients, interaural level differences, and audio bandwidth were identified as the key descriptors facilitating the discrimination process in the traditional approach.
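The synthesis step described in the abstract (convolving a source recording with a set of left- and right-ear head-related impulse responses) can be sketched as follows. This is a minimal single-source illustration with randomly generated stand-in HRIRs, not the authors' pipeline:

```python
import numpy as np

def synthesize_binaural(mono, hrir_left, hrir_right):
    """Render a mono source as a binaural pair by convolving it with
    the left- and right-ear head-related impulse responses (HRIRs)."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Toy example: a noise burst and a random stand-in 64-tap HRIR.
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
decay = np.exp(-np.arange(64) / 16.0)
hrir_l = rng.standard_normal(64) * decay
hrir_r = 0.5 * hrir_l  # right ear 6 dB quieter, i.e. a pure ILD
left, right = synthesize_binaural(mono, hrir_l, hrir_r)
```

In the study itself, measured HRTF sets encode the direction-dependent spectral and interaural cues; the scaled copy above merely stands in for an interaural level difference.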


2021, Vol 9
Author(s): Andrew C. Mason

Insects are often small relative to the wavelengths of the sounds they need to localize, which presents a fundamental biophysical problem. Understanding novel solutions to this limitation can provide insights for biomimetic technologies. Such an approach has been successful using the fly Ormia ochracea (Diptera: Tachinidae) as a model. O. ochracea is a parasitoid species whose larvae develop as internal parasites within crickets (Gryllidae). In nature, female flies find singing male crickets by phonotaxis, despite severe constraints on directional hearing due to their small size. A physical coupling between the two tympanal membranes allows the flies to obtain highly accurate information about sound source direction: it generates interaural time differences (ITDs) and interaural level differences (ILDs) in tympanal vibrations that are exaggerated relative to the small arrival-time difference at the two ears, which is the only cue available in the sound stimulus. In this study, I demonstrate that pure time differences in the neural responses to sound stimuli are sufficient for auditory directionality in O. ochracea.


Author(s): M. Torben Pastore, Kathryn R. Pulling, Chen Chen, William A. Yost, Michael F. Dorman

Purpose: For bilaterally implanted patients, the automatic gain control (AGC) in the left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent, and when listeners were stationary versus allowed to move their heads.
Method: Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ±30° during sound presentation.
Results: In general, listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well as or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of AGC synchronization.
Conclusion: Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients.
Supplemental Material: https://doi.org/10.23641/asha.14681412
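The mechanism at issue, independent per-ear compression shrinking the interaural level difference while a shared gain preserves it, can be illustrated with a toy static-compression model. The threshold and ratio below are arbitrary illustrative values, not the settings of any clinical processor:

```python
def agc_gain_db(level_db, threshold_db=-40.0, ratio=3.0):
    """Static gain (in dB) of a simple feed-forward compressor."""
    over = max(0.0, level_db - threshold_db)
    return -over * (1.0 - 1.0 / ratio)

def output_ild(left_db, right_db, linked):
    """Interaural level difference at the processor outputs."""
    if linked:
        # Synchronized AGCs apply one shared gain (here driven by the
        # louder ear), so the input ILD passes through unchanged.
        g = agc_gain_db(max(left_db, right_db))
        gain_left = gain_right = g
    else:
        gain_left = agc_gain_db(left_db)
        gain_right = agc_gain_db(right_db)
    return (left_db + gain_left) - (right_db + gain_right)

# A source off to the left: 10 dB ILD at the input.
print(output_ild(-20.0, -30.0, linked=True))   # 10.0 dB, preserved
print(output_ild(-20.0, -30.0, linked=False))  # ~3.3 dB, compressed
```

With independent 3:1 compressors, the louder ear is turned down more than the quieter one, so the 10 dB input ILD shrinks to about a third of its size.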


2021
Author(s): Alan Archer-Boyd, Robert P. Carlyon

We simulated the effects of several automatic gain control (AGC) and AGC-like systems, and of head movement, on the output levels and resulting interaural level differences (ILDs) produced by bilateral cochlear-implant (CI) processors. The simulated AGC systems included unlinked AGCs with a range of parameter settings, linked AGCs, and two proprietary multi-channel systems used in contemporary CIs. The results show that, over the range of values used clinically, the parameters that most strongly affect dynamic ILDs are the release time and compression ratio. Linking AGCs preserves ILDs at the expense of monaural level changes and, possibly, comfortable listening level. Multi-channel AGCs can whiten output spectra and/or distort the dynamic changes in ILD that occur during and after head movement. We propose that an unlinked compressor with a ratio of approximately 3:1 and a release time of 300–500 ms can preserve the shape of dynamic ILDs without causing large spectral distortions or sacrificing listening comfort.
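A minimal single-band version of the kind of compressor being simulated, exposing the compression ratio and release time the abstract identifies as most influential, might look like the sketch below. It is an illustration of the general technique, not either manufacturer's proprietary multi-channel AGC:

```python
import numpy as np

def compress(x, fs, threshold_db=-40.0, ratio=3.0,
             attack_ms=5.0, release_ms=300.0):
    """Feed-forward compressor with one-pole attack/release smoothing
    of the level estimate (all level arithmetic in dB)."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -120.0                 # level estimate, starts near silence
    out = np.empty_like(x)
    for n, s in enumerate(x):
        in_db = 20.0 * np.log10(abs(s) + 1e-12)
        coeff = a_att if in_db > env_db else a_rel
        env_db = coeff * env_db + (1.0 - coeff) * in_db
        over = max(0.0, env_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)   # e.g. 3:1 above threshold
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out

# Loud burst followed by a quiet tail: after the level drops, the gain
# recovers over the release time rather than instantly, which is what
# shapes the dynamic ILDs during and after a head movement.
fs = 8000
burst = np.concatenate([np.full(2000, 0.5), np.full(2000, 0.05)])
y = compress(burst, fs)
```

Running one such compressor per ear with independent level estimates reproduces the "unlinked" condition; feeding both ears the gain derived from a shared level estimate reproduces the "linked" condition.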


Acta Acustica, 2021, Vol 5, pp. 10
Author(s): Johannes M. Arend, Heinrich R. Liesefeld, Christoph Pörschmann

Nearby sound sources provide distinct binaural cues, mainly in the form of interaural level differences, which vary with distance and azimuth. However, there is a long-standing controversy regarding whether humans can actually utilize binaural cues for distance estimation of nearby sources. We therefore conducted three experiments using non-individual binaural synthesis. In Experiment 1, subjects had to estimate the relative distance of loudness-normalized and non-normalized nearby sources in static and dynamic binaural rendering in a multi-stimulus comparison task under anechoic conditions. Loudness normalization was used as a plausible method to compensate for noticeable intensity differences between stimuli. With the employed loudness normalization, nominal distance did not significantly affect distance ratings for most conditions, despite the presence of non-individual binaural distance cues. In Experiment 2, subjects had to judge the relative distance between loudness-normalized sources in dynamic binaural rendering in a forced-choice task. Below-chance performance in this more sensitive task revealed that the employed loudness normalization strongly affected distance estimation. As this finding indicated a general issue with loudness normalization for studies on relative distance estimation, Experiment 3 directly tested the validity of loudness normalization and of a frequently used amplitude normalization. Results showed that both normalization methods leave residual (incorrect) intensity cues, which subjects most likely used for relative distance estimation. The experiments revealed that both examined normalization methods have consequential drawbacks, which might in part explain conflicting findings in the literature regarding the effectiveness of binaural cues for relative distance estimation.
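The pitfall the third experiment exposes, that normalization can leave residual intensity cues, is easy to reproduce in miniature. The toy example below uses peak (amplitude) normalization on two artificial signals: their maxima are equalized, yet their energies still differ by tens of decibels. The study's stimuli and loudness model were of course more sophisticated:

```python
import numpy as np

def peak_normalize(x):
    """Amplitude normalization: scale so the maximum magnitude is 1."""
    return x / np.max(np.abs(x))

def rms(x):
    return np.sqrt(np.mean(x ** 2))

t = np.linspace(0.0, 1.0, 48000, endpoint=False)
tone = np.sin(2.0 * np.pi * 500.0 * t)   # dense signal: high energy
clicks = np.zeros_like(t)
clicks[::4800] = 1.0                     # sparse signal: low energy

a, b = peak_normalize(tone), peak_normalize(clicks)
residual_db = 20.0 * np.log10(rms(a) / rms(b))
# Equal peaks, very unequal energy: a large residual intensity cue.
```

Any normalization that equalizes one statistic (peak, RMS, or a loudness estimate) can leave another level-related statistic unequal, and listeners may latch onto whatever residual cue remains.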


2021, Vol 25, pp. 233121652110181
Author(s): Taylor A. Bakal, Kristina DeRoy Milvae, Chen Chen, Matthew J. Goupell

Speech understanding in noise is poorer in bilateral cochlear-implant (BICI) users than in their normal-hearing counterparts. Independent automatic gain controls (AGCs) may contribute to this, because adjusting processor gain independently can reduce the interaural level differences that BICI listeners rely on for bilateral benefits. Bilaterally linked AGCs may improve bilateral benefits by increasing the magnitude of interaural level differences. The effects of linked AGCs on bilateral benefits (summation, head shadow, and squelch) were measured in nine BICI users. Speech understanding for a target talker at 0° masked by a single talker at 0°, 90°, or −90° azimuth was assessed under headphones with sentences at five target-to-masker ratios. Research processors were used to manipulate AGC type (independent or linked) and test ear (left, right, or both). Sentence recall was measured in quiet to quantify individual interaural asymmetry in functional performance. The results showed that AGC type did not significantly change performance or bilateral benefits. Interaural functional asymmetries, however, interacted with ear such that greater summation and squelch benefit occurred when there was larger functional asymmetry, and interacted with interferer location such that smaller head shadow benefit occurred when there was larger functional asymmetry. The larger benefits for those with larger asymmetry were driven by improvements from adding a better-performing ear, rather than by a true binaural-hearing benefit. In summary, linked AGCs did not significantly change bilateral benefits in cases of speech-on-speech masking with a single-talker masker, but there was also no strong detriment across a range of target-to-masker ratios in this small and diverse BICI listener population.


2021, Vol 25, pp. 233121652110304
Author(s): William O. Gray, Paul G. Mayo, Matthew J. Goupell, Andrew D. Brown

Acoustic hearing listeners use binaural cues, interaural time differences (ITDs) and interaural level differences (ILDs), for localization and segregation of sound sources in the horizontal plane. Cochlear implant users now often receive two implants (bilateral cochlear implants [BiCIs]) rather than one, with the goal of providing access to these cues. However, BiCI listeners often experience difficulty with binaural tasks. Most BiCIs use independent sound processors at each ear; it has often been suggested that such independence may degrade the transmission of binaural cues, particularly ITDs. Here, we report empirical measurements of binaural cue transmission via BiCIs implementing a common “n-of-m” spectral peak-picking stimulation strategy. Measurements were completed for speech and nonspeech stimuli presented to an acoustic manikin “fitted” with BiCI sound processors. Electric outputs from the BiCIs and acoustic outputs from the manikin’s in-ear microphones were recorded simultaneously, enabling comparison of electric and acoustic binaural cues. For source locations away from the midline, BiCI binaural cues, particularly envelope ITD cues, were degraded by asymmetric spectral peak-picking. In addition, pulse amplitude saturation due to nonlinear level mapping yielded smaller ILDs at higher presentation levels. Finally, while individual pulses conveyed a spurious “drifting” ITD, consistent with independent left and right processor clocks, such variation was not evident in transmitted envelope ITDs. Results point to avenues for improvement of BiCI technology and may prove useful in interpreting BiCI spatial hearing outcomes reported in prior and future studies.
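A simplified version of "n-of-m" spectral peak-picking can be sketched as follows; the channel counts and envelope values are arbitrary, and real strategies operate on filter-bank envelopes frame by frame:

```python
import numpy as np

def n_of_m(envelopes, n):
    """Per analysis frame, keep the n largest of m channel envelopes
    and zero the rest (simplified spectral peak-picking)."""
    env = np.asarray(envelopes, dtype=float)
    out = np.zeros_like(env)
    for frame in range(env.shape[1]):
        top = np.argsort(env[:, frame])[-n:]   # indices of the n peaks
        out[top, frame] = env[top, frame]
    return out

# 8 channels, 4 frames, keep the 3 largest channels per frame.
rng = np.random.default_rng(1)
env = rng.random((8, 4))
selected = n_of_m(env, 3)
```

Because each ear's processor selects channels independently, slightly different left and right envelopes can yield different selected channel sets at the two ears, one route by which asymmetric peak-picking can degrade binaural cues.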

