Spatial speech-in-noise performance in simulated single-sided deaf and bimodal cochlear implant users in comparison with real patients

Author(s):
Tim Jürgens
Thomas Wesarg
Dirk Oetting
Lorenz Jung
Ben Williges


2019
Vol 9 (1)
Author(s):
Mark D. Fletcher
Amatullah Hadeedi
Tobias Goehring
Sean R. Mills

Cochlear implant (CI) users receive only limited sound information through their implant, which means that they struggle to understand speech in noisy environments. Recent work has suggested that combining the electrical signal from the CI with a haptic signal that provides crucial missing sound information ("electro-haptic stimulation"; EHS) could improve speech-in-noise performance. The aim of the current study was to test whether EHS could enhance speech-in-noise performance in CI users using (1) a tactile signal derived by an algorithm that could be applied in real time, (2) a stimulation site appropriate for a real-world application, and (3) a tactile signal that could readily be produced by a compact, portable device. We measured speech intelligibility in multi-talker noise, with and without vibro-tactile stimulation of the wrist, in CI users before and after a short training regime. No effect of EHS was found before training, but after training EHS improved the percentage of words correctly identified by an average of 8.3 percentage points, with some users improving by more than 20 percentage points. Our approach could offer an inexpensive and non-invasive means of improving speech-in-noise performance in CI users.
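The abstract does not specify the tactile-signal algorithm, so the Python sketch below is only a rough illustration of the general approach: a causal (real-time capable) envelope extraction that amplitude-modulates a fixed vibration carrier for the wrist. The carrier frequency and envelope cutoff are assumed values, not parameters from the study.

```python
# Hedged sketch of an envelope-based audio-to-tactile mapping; the
# carrier frequency and envelope cutoff below are illustrative
# assumptions, not the algorithm used in the study.
import numpy as np
from scipy.signal import butter, sosfilt

def audio_to_tactile(audio, fs, carrier_hz=230.0, env_cutoff_hz=30.0):
    """Convert a speech waveform into a wrist-vibration waveform.

    Rectify the audio, low-pass filter it with a causal filter to
    extract a slow amplitude envelope, then use that envelope to
    amplitude-modulate a fixed-frequency carrier near the skin's
    region of highest vibrotactile sensitivity.
    """
    sos = butter(4, env_cutoff_hz, btype="low", fs=fs, output="sos")
    envelope = np.maximum(sosfilt(sos, np.abs(audio)), 0.0)
    t = np.arange(len(audio)) / fs
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)
```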


2021
Vol 32 (08)
pp. 478-486
Author(s):
Lisa G. Potts
Soo Jang
Cory L. Hillis

Abstract

Background: For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer than recognition in quiet. Directional processing improves performance in noise and can be activated automatically based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially with the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction.

Purpose: This study used CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment and determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming.

Study Sample: Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included.

Data Collection and Analysis: Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array, with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and omnidirectional) with participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed.

Results: Automatic directionality (SCAN) yielded the best speech-in-noise performance, although it was not significantly better than Beam. Omnidirectional performance was significantly poorer than the three other directional options. The number of participants who performed best with each of the four directional options varied, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option.

Conclusion: The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. For recipients who are not motivated to try different programs, however, automatic directionality is an appropriate everyday processing option.
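As a sketch of how such a repeated-measures comparison can be set up, the Python fragment below fits a mixed model with participant as a random intercept and runs pairwise follow-ups. The data file and column names are hypothetical stand-ins, and the study's exact statistical model may differ.

```python
# Hypothetical data layout: one row per participant x microphone
# condition, with columns "subject", "condition" (SCAN, Beam, Zoom,
# Omni), and "pct_correct" (HINT words correct in restaurant noise).
from itertools import combinations

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("hint_in_restaurant_noise.csv")  # hypothetical file

# Mixed model: microphone condition as a fixed effect,
# participant as a random intercept.
mixed = smf.mixedlm("pct_correct ~ C(condition)", df,
                    groups=df["subject"]).fit()
print(mixed.summary())

# Pairwise follow-up comparisons between the four conditions.
wide = df.pivot(index="subject", columns="condition",
                values="pct_correct")
for a, b in combinations(wide.columns, 2):
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")
```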


2019
Vol 23
pp. 233121651985831
Author(s):
Ben Williges
Thomas Wesarg
Lorenz Jung
Leontien I. Geven
Andreas Radeloff
...

This study compared spatial speech-in-noise performance in two cochlear implant (CI) patient groups: bimodal listeners, who use a hearing aid contralaterally to support their impaired acoustic hearing, and listeners with contralateral normal hearing, i.e., who were single-sided deaf before implantation. Using a laboratory setting that controls for head movements and simulates spatial acoustic scenes, speech reception thresholds were measured for frontal speech in stationary noise presented from the front, the left, or the right side. Spatial release from masking (SRM) was then extracted from the speech reception thresholds for monaural and binaural listening. SRM was found to be significantly lower in bimodal CI listeners than in single-sided deaf CI listeners. Within each listener group, the SRM extracted from monaural listening did not differ from the SRM extracted from binaural listening. In contrast, a normal-hearing control group showed a significant improvement in SRM when using two ears in comparison to one. Neither CI group showed a binaural summation effect; that is, their performance was not improved by using two devices instead of the best monaural device in each spatial scenario. The results confirm a "listening with the better ear" strategy in the two CI patient groups: patients benefited from using two ears/devices instead of one by selectively attending to the better one. Which is the better ear, however, depends on the spatial scenario and on the individual configuration of hearing loss.
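The two derived measures can be written down compactly. The sketch below uses the common definitions (my formulation, not necessarily the paper's exact computation); lower speech reception thresholds (SRTs) mean better performance, so positive values indicate a benefit.

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM in dB: how much the SRT improves (drops) when the noise is
    moved away from the frontal speech source. Positive = benefit."""
    return srt_colocated_db - srt_separated_db

def binaural_summation(srt_best_monaural_db, srt_binaural_db):
    """Summation benefit in dB: improvement from listening with two
    ears/devices over the best single ear/device in the same scene.
    Near zero for both CI groups in this study."""
    return srt_best_monaural_db - srt_binaural_db
```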


2018
Vol 367
pp. 223-230
Author(s):
Damien Bonnard
Adam Schwalje
Bruce Gantz
Inyong Choi

2021
Author(s):
Joel I. Berger
Phillip E. Gander
Subong Kim
Adam T. Schwalje
Jihwan Woo
...

Abstract

Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN, and this variance cannot be explained by simple peripheral hearing profiles. Recent work by our group (Kim et al., 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of SiN ability in a large cohort of cochlear implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics.

Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several demographic and hearing factors, as predictors of SiN performance.

Results: In general, there was good agreement between scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in NH subjects, in whom speech perception ability was accounted for by the ability to suppress noise.

Conclusions: These data indicate a neurophysiological correlate of SiN performance that can be captured relatively easily in the clinic, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. The results also highlight important differences between sentence- and word-recognition measures of performance and suggest that individual differences in these measures may be underpinned by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
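As an illustration of the kind of regression described, the Python sketch below regresses word-in-noise score on the Cz N1-P2 amplitude plus demographic and hearing covariates. All file and column names are hypothetical stand-ins, not the study's variables.

```python
# Hypothetical per-subject table: CCT score, N1-P2 amplitude at Cz,
# and covariates named here only for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ci_users_erp_and_speech.csv")  # hypothetical file

model = smf.ols(
    "cct_pct_correct ~ n1p2_amplitude_uv + age_years"
    " + device_use_years + low_freq_pta_db",
    data=df,
).fit()
print(model.summary())  # does N1-P2 amplitude predict CCT score?
```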


2018
Vol 36 (2)
pp. 156-174
Author(s):
Ritva Torppa
Andrew Faulkner
Teija Kujala
Minna Huotilainen
Jari Lipsanen

The perception of speech in noise is challenging for children with cochlear implants (CIs). Singing and musical instrument playing have been associated with improved auditory skills in normal-hearing (NH) children. We therefore assessed how the perception of speech in noise develops in children with CIs who sing informally compared with those who do not. We also sought evidence of links between speech perception in noise and MMN and P3a brain responses to musical sounds, and studied effects of age and changes over a 14–17 month period in the speech-in-noise performance of children with CIs. Compared with the NH group, the entire CI group was less tolerant of noise in speech perception, but both groups improved similarly. The CI singing group showed better speech-in-noise perception than the CI non-singing group. The perception of speech in noise in children with CIs was associated with the amplitude of the MMN to a change of sound from piano to cymbal and, in the CI singing group only, with an earlier P3a to changes in timbre. While our results cannot address causality, they suggest that singing and musical instrument playing have the potential to enhance the perception of speech in noise in children with CIs.
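The MMN amplitude is conventionally measured from the deviant-minus-standard difference wave. The sketch below shows one common way to quantify it; the latency window is a typical choice, not necessarily the one used in this study.

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, times,
                  window=(0.15, 0.25)):
    """Mean deviant-minus-standard difference-wave amplitude within an
    MMN latency window (in seconds). ERPs are 1-D arrays of
    trial-averaged voltages at one electrode; a more negative value
    indicates a larger MMN."""
    difference = deviant_erp - standard_erp
    mask = (times >= window[0]) & (times <= window[1])
    return float(difference[mask].mean())
```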


2019
Vol 28 (1)
pp. 1-10
Author(s):
Jantien L. Vroegop
J. Gertjan Dingemanse
Marc P. van der Schroeff
André Goedegebure

Purpose: The aim of the study was to investigate the effect of three hearing aid fitting procedures on the provided gain of the hearing aid in bimodal cochlear implant users, and their effect on bimodal benefit.

Method: This prospective study measured hearing aid gain and auditory performance in a cross-over design in which three hearing aid fitting methods were compared. The fitting methods differed in initial gain prescription rule (NAL-NL2 vs. Audiogram+) and loudness balancing method (broadband vs. narrowband loudness balancing). Auditory functioning was evaluated with a speech-in-quiet test, a speech-in-noise test, and a sound localization test. Fourteen postlingually deafened adult bimodal cochlear implant users participated in the study.

Results: No differences in provided gain or in bimodal performance were found between the different hearing aid fittings. For all fittings, a bimodal benefit was found for speech in noise and sound localization.

Conclusion: Our results confirm that cochlear implant users with residual hearing in the contralateral ear benefit substantially from bimodal stimulation. However, on average, no differences were found between fitting methods varying in prescription rule and loudness balancing method.
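Bimodal benefit is simply the performance difference between the bimodal condition and the CI-alone condition. The sketch below states this for the two outcome types where a benefit was found; the sign conventions are my formulation, assuming lower-is-better scores on both tests, so positive values mean the hearing aid helps.

```python
def bimodal_benefit_srt(srt_ci_alone_db, srt_bimodal_db):
    """Speech-in-noise benefit in dB (SRTs: lower is better).
    Positive = adding the contralateral hearing aid helps."""
    return srt_ci_alone_db - srt_bimodal_db

def bimodal_benefit_localization(err_ci_alone_deg, err_bimodal_deg):
    """Localization benefit in degrees of error (lower is better).
    Positive = adding the contralateral hearing aid helps."""
    return err_ci_alone_deg - err_bimodal_deg
```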

