Pinna-Imitating Microphone Directionality Improves Sound Localization and Discrimination in Bilateral Cochlear Implant Users

Author(s):  
Tim Fischer ◽  
Christoph Schmid ◽  
Martin Kompis ◽  
Georgios Mantokoudis ◽  
Marco Caversaccio ◽  
...  

Abstract

Objectives: To compare the sound-source localization, discrimination and tracking performance of bilateral cochlear implant users with omnidirectional (OMNI) and pinna-imitating (PI) microphone directionality modes.

Design: Twelve experienced bilateral cochlear implant users participated in the study. Their audio processors were fitted with two different programs featuring either the OMNI or PI mode. Each subject performed static and dynamic sound field spatial hearing tests in the horizontal plane. The static tests consisted of an absolute sound localization test and a minimum audible angle (MAA) test, which was measured at 8 azimuth directions. Dynamic sound tracking ability was evaluated by the subject correctly indicating the direction of a moving stimulus along two circular paths around the subject.

Results: PI mode led to statistically significant sound localization and discrimination improvements. For static sound localization, the greatest benefit was a reduction in the number of front-back confusions. The front-back confusion rate was reduced from 47% with OMNI mode to 35% with PI mode (p = 0.03). Discriminating sound sources at the sides was only possible with PI mode: the MAA value for the sides decreased from 75.5 to 37.7 degrees when PI mode was used (p < 0.001). Furthermore, a non-significant trend towards an improvement in the ability to track sound sources was observed for both trajectories tested (p = 0.34 and p = 0.27).

Conclusions: Our results demonstrate that PI mode can lead to improved spatial hearing performance in bilateral cochlear implant users, mainly as a consequence of improved front-back discrimination with PI mode.
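The front-back confusion rate reported above can be scored directly from raw localization responses: a trial counts as confused when the response falls in the opposite front/back hemifield from the stimulus. A minimal sketch of such scoring (the function name, the sign-of-cosine convention, and the 15-degree exclusion zone around the interaural axis are illustrative assumptions, not the study's protocol):

```python
import numpy as np

def front_back_confusions(stimulus_az, response_az, lateral_tol=15.0):
    """Fraction of trials whose response lies in the opposite front/back
    hemifield from the stimulus. Azimuths in degrees: 0 = front,
    +/-90 = sides, 180 = back. Trials within `lateral_tol` degrees of
    the interaural axis are excluded, since front/back is ill-defined
    there. (Illustrative convention only.)"""
    stim = np.asarray(stimulus_az, dtype=float)
    resp = np.asarray(response_az, dtype=float)
    # Distance of the stimulus from the +/-90 degree interaural axis
    wrapped = np.abs(((stim + 180.0) % 360.0) - 180.0)
    keep = np.abs(wrapped - 90.0) > lateral_tol
    # cos(az) > 0 -> frontal hemifield, cos(az) < 0 -> rear hemifield
    confused = np.sign(np.cos(np.radians(stim))) != np.sign(np.cos(np.radians(resp)))
    return np.mean(confused[keep])

# Example: one of four scored trials is a front-back confusion
rate = front_back_confusions([0, 30, 150, 180], [0, 150, 160, 175])  # 0.25
```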

Author(s):  
Snandan Sharma ◽  
Waldo Nogueira ◽  
A. John van Opstal ◽  
Josef Chalupper ◽  
Lucas H. M. Mens ◽  
...  

Purpose: Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility and thus improve sound localization and spatial speech recognition.

Method: Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz).

Results: Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively), but no meaningful improvement in localization performance (errors remained > 30 deg) or in spatial speech recognition across participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition.

Conclusions: We speculate that limiting factors such as a persistent hearing asymmetry and a mismatch in spectral overlap prevent compression from improving sound localization in bimodal users. The benefit in spatial release from masking with compression is therefore likely due to a shift of attention to the ear with the better signal-to-noise ratio, facilitated by compression, rather than to improved spatial selectivity.

Supplemental Material: https://doi.org/10.23641/asha.16869485
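The static algorithm described above remaps energy above a knee-point frequency into a lower range to restore audibility. A minimal sketch of such a mapping, assuming a simple linear compression beyond the knee (the 2:1 ratio and function name are illustrative assumptions; the study's knee points were 160 or 480 Hz, but its compression ratios are not given here):

```python
def compress_frequency(f_hz, knee_hz=480.0, ratio=2.0):
    """Static frequency compression: frequencies at or below the knee
    point pass unchanged; frequencies above it are compressed toward
    the knee by `ratio`. (Illustrative linear-compression sketch.)"""
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio

# A 3-kHz consonant cue is remapped into a lower, potentially audible range:
compress_frequency(3000.0)  # 480 + 2520/2 = 1740.0 Hz
compress_frequency(200.0)   # below the knee, unchanged: 200.0 Hz
```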



2020 ◽  
Vol 11 ◽  
Author(s):  
Sebastián A. Ausili ◽  
Martijn J. H. Agterberg ◽  
Andreas Engel ◽  
Christiane Voelter ◽  
Jan Peter Thomas ◽  
...  

2015 ◽  
Vol 39 (1) ◽  
pp. 81-88 ◽  
Author(s):  
Daniel Fernández Comesana ◽  
Keith R. Holland ◽  
Dolores García Escribano ◽  
Hans-Elias de Bree

Abstract Sound localization problems are usually tackled by acquiring data from phased microphone arrays and applying acoustic holography or beamforming algorithms. However, the number of sensors required to achieve reliable results is often prohibitive, particularly if the frequency range of interest is wide. It is shown that the number of sensors required can be reduced dramatically, provided the sound field is time-stationary. Scanning techniques such as "Scan & Paint" allow data to be gathered across a sound field quickly and efficiently, using only a single sensor and a webcam. It is also possible to characterize the relative phase field by including an additional static microphone during the acquisition process. This paper presents the theoretical and experimental basis of the proposed method to localize sound sources using only one fixed microphone and one moving acoustic sensor. The accuracy and resolution of the method are shown to be comparable to those of large microphone arrays, thus constituting so-called "virtual phased arrays".
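The key idea above is that a fixed reference microphone makes the phase measured at each scan position mutually consistent. One common way to sketch this is a Welch-averaged relative transfer function H(f) = S_mr(f) / S_rr(f) between the reference and the moving sensor (a minimal illustration under that assumption; the function name, windowing, and segment length are not from the paper):

```python
import numpy as np

def relative_transfer(moving_sig, ref_sig, fs, nfft=1024):
    """Welch-averaged relative transfer function between the fixed
    reference microphone and the current scan position:
    H(f) = S_mr(f) / S_rr(f), with S_mr the cross-spectrum and S_rr
    the reference auto-spectrum. Because the reference is common to
    every scan position, the phase of H stays consistent across
    positions for a time-stationary field, which is what lets one
    moving sensor emulate a phased array. (Sketch, not the authors'
    implementation.)"""
    n_seg = min(len(moving_sig), len(ref_sig)) // nfft
    win = np.hanning(nfft)
    S_mr = np.zeros(nfft, dtype=complex)
    S_rr = np.zeros(nfft)
    for k in range(n_seg):
        seg = slice(k * nfft, (k + 1) * nfft)
        M = np.fft.fft(win * moving_sig[seg])
        R = np.fft.fft(win * ref_sig[seg])
        S_mr += M * np.conj(R)   # accumulate cross-spectrum
        S_rr += np.abs(R) ** 2   # accumulate reference auto-spectrum
    freqs = np.fft.fftfreq(nfft, d=1.0 / fs)
    return freqs, S_mr / S_rr
```

For a scaled copy of the reference signal, H(f) recovers the scaling factor at every bin, which is a quick sanity check on the estimator.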


2019 ◽  
Author(s):  
T. Fischer ◽  
M. Kompis ◽  
G. Mantokoudis ◽  
M. Caversaccio ◽  
W. Wimmer

Abstract Although spatial hearing is of great importance in everyday life, today's routine audiological test batteries and static test setups assess sound localization, discrimination and tracking abilities only rudimentarily and thus provide a limited interpretation of treatment outcomes regarding spatial hearing performance. To address this limitation, we designed a dynamic sound field test setup and evaluated the sound localization, discrimination and tracking performance of 12 normal-hearing subjects. During testing, participants provided feedback either through a touchpad or through eye tracking. In addition, the influence of head movement on sound-tracking performance was investigated. Our results show that tracking and discrimination performance was significantly better in the frontal azimuth than in the dorsal azimuth. Particularly good performance was observed in the backward direction across localization, discrimination and tracking tests. As expected, free head movement improved sound-tracking abilities. Furthermore, feedback via gaze detection led to larger tracking errors than feedback via the touchpad. We found statistically significant correlations between the static and dynamic tests, which favor the snapshot theory for auditory motion perception.
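A tracking test like the one described compares an indicated direction against a moving target; a natural summary statistic is an RMS angular error computed with the wrapped (circular) difference, so that 359 versus 1 degree counts as 2 degrees rather than 358. A minimal sketch of that metric (an illustrative measure; the study's exact error definition may differ):

```python
import math

def rms_tracking_error(stimulus_az, response_az):
    """RMS angular error between a moving stimulus trajectory and the
    subject's indicated directions, using the wrapped difference in
    (-180, 180] degrees. (Illustrative metric.)"""
    diffs = [((r - s + 180.0) % 360.0) - 180.0
             for s, r in zip(stimulus_az, response_az)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Wrapped errors are 2, 0 and -2 degrees, not 358:
err = rms_tracking_error([359.0, 0.0, 1.0], [1.0, 0.0, 359.0])
```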


Author(s):  
Maike Klingel ◽  
Bernhard Laback

Abstract Normal-hearing (NH) listeners rely on two binaural cues, the interaural time (ITD) and level difference (ILD), for azimuthal sound localization. Cochlear-implant (CI) listeners, however, rely almost entirely on ILDs. One reason is that present-day clinical CI stimulation strategies do not convey salient ITD cues. But even when presenting ITDs under optimal conditions using a research interface, ITD sensitivity is lower in CI compared to NH listeners. Since it has recently been shown that NH listeners change their ITD/ILD weighting when only one of the cues is consistent with visual information, such reweighting might add to CI listeners' low perceptual contribution of ITDs, given their daily exposure to reliable ILDs but unreliable ITDs. Six bilateral CI listeners completed a multi-day lateralization training visually reinforcing ITDs, flanked by a pre- and post-measurement of ITD/ILD weights without visual reinforcement. Using direct electric stimulation, we presented 100- and 300-pps pulse trains at a single interaurally place-matched electrode pair, conveying ITDs and ILDs in various spatially consistent and inconsistent combinations. The listeners' task was to lateralize the stimuli in a virtual environment. Additionally, ITD and ILD thresholds were measured before and after training. For 100-pps stimuli, the lateralization training increased the contribution of ITDs slightly, but significantly. Thresholds were neither affected by the training nor correlated with weights. For 300-pps stimuli, ITD weights were lower and ITD thresholds larger, but there was no effect of training. On average across test sessions, adding azimuth-dependent ITDs to stimuli containing ILDs increased the extent of lateralization for both 100- and 300-pps stimuli. The results suggest that low-rate ITD cues, robustly encoded with future CI systems, may be better exploitable for sound localization after increasing their perceptual weight via training.
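The ITD/ILD weighting measured above is often modeled as a weighted sum of the two normalized cues predicting perceived laterality. A minimal sketch of such a weighted-cue model (the weight, normalization ranges, and function name are illustrative assumptions, not the study's fitted values):

```python
def predicted_lateralization(itd_us, ild_db, w_itd=0.3,
                             itd_range_us=600.0, ild_range_db=10.0):
    """Weighted-cue laterality model: each cue is normalized to a
    nominal range and clipped to [-1, 1], then combined with weight
    w_itd on the ITD and (1 - w_itd) on the ILD. Positive values mean
    lateralization toward one ear. (Illustrative parameter values.)"""
    itd_norm = max(-1.0, min(1.0, itd_us / itd_range_us))
    ild_norm = max(-1.0, min(1.0, ild_db / ild_range_db))
    return w_itd * itd_norm + (1.0 - w_itd) * ild_norm

# Spatially consistent cues reinforce each other...
consistent = predicted_lateralization(300.0, 5.0)     # 0.3*0.5 + 0.7*0.5 = 0.5
# ...while inconsistent cues pull the percept toward the ILD side:
inconsistent = predicted_lateralization(-300.0, 5.0)  # -0.15 + 0.35 = 0.2
```

Raising `w_itd`, as the training above aims to do perceptually, makes the predicted percept track the ITD more closely in the inconsistent case.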


2009 ◽  
Vol 30 (4) ◽  
pp. 419-431 ◽  
Author(s):  
Ruth Y. Litovsky ◽  
Aaron Parkinson ◽  
Jennifer Arcaroli
