Improving Speech Perception in Noise for Children with Cochlear Implants

2011 ◽  
Vol 22 (09) ◽  
pp. 623-632 ◽  
Author(s):  
René H. Gifford ◽  
Amy P. Olund ◽  
Melissa DeJong

Background: Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. Purpose: To assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Research Design: Single-subject, repeated-measures design. Study Sample: Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Intervention: Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects’ everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity control (ASC). Data Collection and Analysis: Adaptive SRTs with the Hearing in Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance, in percent correct, was assessed at a fixed +6 dB signal-to-noise ratio (SNR) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. Results: The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean improvement in the SRT with the addition of ASC to ADRO was 3.5 dB, for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit in noisy environments that accompanies a diagnosis of severe-to-profound hearing loss. Conclusion: SmartSound strategies currently available in the latest generation of Nucleus cochlear implant sound processors are able to significantly improve speech understanding in realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO in a child's typical everyday program to improve speech perception in everyday listening environments.
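
For readers unfamiliar with how the adaptive SRTs above are obtained, the sketch below shows a simple one-down/one-up adaptive track that converges on the SNR for 50% sentence recognition. It is illustrative only: the simulated listener, 2 dB step size, and averaging rule are assumptions, not the exact HINT/R-SPACE protocol used in the study.

```python
# Illustrative sketch of a one-down/one-up adaptive track converging on the
# 50%-correct SNR (an SRT). The simulated listener, step size, and averaging
# rule are assumptions; the study's HINT procedure may differ in detail.
import random

def present_sentence(snr_db, true_srt_db=10.0, slope=1.0):
    """Placeholder trial: returns True if the (simulated) child repeats the
    sentence correctly, using a logistic psychometric function."""
    p_correct = 1.0 / (1.0 + 10 ** (-slope * (snr_db - true_srt_db) / 10.0))
    return random.random() < p_correct

def adaptive_srt(start_snr_db=20.0, step_db=2.0, n_trials=20):
    snr, levels = start_snr_db, []
    for _ in range(n_trials):
        levels.append(snr)
        if present_sentence(snr):
            snr -= step_db   # correct response -> make the task harder
        else:
            snr += step_db   # incorrect response -> make the task easier
    return sum(levels[4:]) / len(levels[4:])   # average the later trial levels as the SRT

print(f"Estimated SRT: {adaptive_srt():.1f} dB SNR")
```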

2018 ◽  
Vol 29 (09) ◽  
pp. 814-825 ◽  
Author(s):  
Patti M. Johnstone ◽  
Kristen E. T. Mills ◽  
Elizabeth Humphrey ◽  
Kelly R. Yeager ◽  
Emily Jones ◽  
...  

Background: Cochlear implant (CI) users are affected more than their normal-hearing (NH) peers by the negative consequences of background noise on speech understanding. Research has shown that adult CI users can improve their speech recognition in challenging listening environments by using dual-microphone beamformers, such as adaptive directional microphones (ADMs) and wireless remote microphones (RMs). The suitability of these microphone technologies for use in children with CIs is not well understood nor widely accepted. Purpose: To assess the benefit of ADM or RM technology on speech perception in background noise for children and adolescents with CIs who had no previous or current use of an ADM or RM. Research Design: Mixed, repeated-measures design. Study Sample: Twenty children participated in this prospective study: ten CI users (mean age 14.3 yr) who used Advanced Bionics HiRes90K implants with research Naida processors, and ten age-matched NH controls. Intervention: CI users listened with an ear-canal-level microphone, the T-Mic (TM), an ADM, and a wireless RM at different audio-mixing ratios. Speech understanding with five microphone settings (TM 100%, ADM, RM + TM 50/50, RM + TM 75/25, RM 100%) was evaluated in quiet and in noise. Data Collection and Analysis: Speech perception ability was measured using children’s spondee words to obtain a speech recognition threshold for 80% accuracy (SRT80%) in 20-talker babble; the listener sat in a sound booth 1 m (3.28′) from the target speech (front) and the noise (behind) while the five microphone settings were tested. Group performance-intensity functions were computed for each listening condition to show the effects of microphone configuration with respect to signal-to-noise ratio (SNR). A difference score (CI group minus NH group) was computed to show the effect of microphone technology at different SNRs relative to NH. Statistical analysis using a repeated-measures analysis of variance evaluated the effects of the microphone configurations on SRT80% and on performance at individual SNRs. A between-groups analysis of variance was used to compare the CI group with the NH group. Results: Speech recognition was significantly poorer for children with CIs than for children with NH in quiet and in noise when using the TM alone. Adding the ADM or RM provided a significant improvement in speech recognition for the CI group over use of the TM alone in noise (mean dB advantage ranged from 5.8 for the ADM to 16 for RM 100%). When children with CIs used the RM + TM 75/25 or RM 100% setting in background babble, speech recognition was not statistically different from that of the NH group. Conclusions: Speech recognition in noise improved with the use of the ADM, RM 100%, or RM + TM 75/25 over the TM alone for children with CIs. Although children with CIs remain at a disadvantage compared with NH children in quiet and at more favorable SNRs, microphone technology can enhance performance for some children with CIs to match that of NH peers in contexts with negative SNRs.
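
As a concrete illustration of the group performance-intensity functions and the CI-minus-NH difference score described above, the sketch below fits a logistic function to hypothetical group-mean scores and prints the difference at each SNR. All numbers and the logistic form are assumptions for illustration, not the study's data.

```python
# Illustrative sketch (made-up data): fit group performance-intensity functions
# with a logistic and compute a CI-minus-NH difference score at each SNR.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    """Percent correct as a function of SNR (dB)."""
    return 100.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

snrs = np.array([-10, -5, 0, 5, 10], dtype=float)        # test SNRs (dB)
nh_scores = np.array([35, 70, 90, 97, 99], dtype=float)   # hypothetical NH group means (%)
ci_scores = np.array([5, 20, 55, 80, 92], dtype=float)    # hypothetical CI group means (%)

(nh_mid, nh_slope), _ = curve_fit(logistic, snrs, nh_scores, p0=[-5, 0.5])
(ci_mid, ci_slope), _ = curve_fit(logistic, snrs, ci_scores, p0=[0, 0.5])

difference = ci_scores - nh_scores   # negative values = CI deficit relative to NH
for snr, d in zip(snrs, difference):
    print(f"SNR {snr:+.0f} dB: CI minus NH = {d:+.0f} points")
print(f"Fitted 50%-correct points: NH {nh_mid:.1f} dB SNR, CI {ci_mid:.1f} dB SNR")
```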


Author(s):  
Jourdan T. Holder ◽  
Adrian L. Taylor ◽  
Linsey W. Sunderhaus ◽  
Rene H. Gifford

Background: Despite improvements in cochlear implant (CI) technology, pediatric CI recipients continue to have more difficulty understanding speech than their typically hearing peers in background noise. A variety of strategies have been evaluated to help mitigate this disparity, such as signal processing, remote microphone technology, and microphone placement. Previous studies regarding microphone placement used speech processors that are now dated, and most studies investigating the improvement of speech recognition in background noise included adult listeners only. Purpose: The purpose of the present study was to investigate the effects of microphone location and beamforming technology on speech understanding for pediatric CI recipients in noise. Research Design: A prospective, repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample: A total of nine children (aged 6.6 to 15.3 years) with at least one Advanced Bionics CI were recruited for this study. Data Collection and Analysis: The Basic English Lexicon Sentences and AzBio Sentences were presented at 0° azimuth at 65 dB SPL in noise at a +5 dB signal-to-noise ratio, delivered from seven speakers using the R-SPACE system (Advanced Bionics, Valencia, CA). Performance was compared across three omnidirectional microphone configurations (processor microphone, T-Mic 2, and processor + T-Mic 2) and two directional microphone configurations (UltraZoom and auto UltraZoom). The two youngest participants were not tested in the directional microphone configurations. Results: No significant differences were found between the various omnidirectional microphone configurations. UltraZoom provided significant benefit over all omnidirectional microphone configurations (T-Mic 2, p = 0.004; processor microphone, p < 0.001; and processor microphone + T-Mic 2, p = 0.018) but was not significantly different from auto UltraZoom (p = 0.176). Conclusions: All omnidirectional microphone configurations yielded similar performance, suggesting that a child’s listening performance in noise will not be compromised by choosing the microphone configuration best suited for the child. UltraZoom (adaptive beamformer) yielded higher performance than all omnidirectional microphones in moderate background noise for adolescents aged 9 to 15 years. The implications of these data suggest that for older children who are able to reliably use manual controls, UltraZoom will yield significantly higher performance in background noise when the target is in front of the listener.
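
The analysis described above, a repeated-measures ANOVA across microphone configurations followed by pairwise comparisons, can be sketched as follows. The data frame, column names, and scores are hypothetical placeholders, not the study's data or analysis code.

```python
# Illustrative sketch: repeated-measures ANOVA across microphone conditions,
# then one post-hoc paired comparison. All values below are invented.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# One row per child per microphone condition (percent correct in noise).
data = pd.DataFrame({
    "subject":   [s for s in range(1, 8) for _ in range(3)],
    "condition": ["T-Mic 2", "Processor mic", "UltraZoom"] * 7,
    "score":     [52, 49, 68,  40, 44, 61,  55, 50, 72,
                  47, 45, 66,  60, 58, 75,  38, 41, 59,  50, 48, 70],
})

print(AnovaRM(data, depvar="score", subject="subject", within=["condition"]).fit())

# Post-hoc paired t-test for one contrast of interest (UltraZoom vs. T-Mic 2);
# rows are already ordered by subject, so the arrays align pairwise.
uz = data.loc[data.condition == "UltraZoom", "score"].to_numpy()
tm = data.loc[data.condition == "T-Mic 2", "score"].to_numpy()
print(stats.ttest_rel(uz, tm))
```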


2010 ◽  
Vol 21 (07) ◽  
pp. 441-451 ◽  
Author(s):  
René H. Gifford ◽  
Lawrence J. Revit

Background: Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. Purpose: To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Research Design: Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Study Sample: Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Intervention: Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam™ preprocessing (Cochlear Corporation) or the T-Mic® accessory option (Advanced Bionics). Data Collection and Analysis: In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturers for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed, obtaining SRTs in noise using the manufacturer-suggested “Everyday,” “Noise,” and “Focus” preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. Results: The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Conclusion: Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments.
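
The across-manufacturer comparison mentioned above amounts to comparing each subject's SRT improvement between the two groups; the sketch below uses an independent-samples t-test on made-up improvement values. Whether the study's "standard t-test" was paired or independent is not stated here, so treat this purely as an illustration.

```python
# Illustrative sketch (hypothetical numbers): compare SRT improvement
# (preferred program minus accessory/preprocessing condition) across the two
# manufacturer groups with an independent-samples t-test.
import numpy as np
from scipy import stats

# Improvement in dB for each subject (positive = lower, i.e., better, SRT).
cochlear_improvement = np.array([3.2, 4.0, 2.5, 5.1, 3.8, 4.4, 2.9, 3.6])
ab_improvement       = np.array([2.8, 3.5, 4.2, 3.0, 3.9, 2.6, 4.1])

t, p = stats.ttest_ind(cochlear_improvement, ab_improvement)
print(f"t = {t:.2f}, p = {p:.3f}")
```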


2012 ◽  
Vol 23 (07) ◽  
pp. 501-509 ◽  
Author(s):  
Erin C. Schafer ◽  
Jody Pogue ◽  
Tyler Milrany

Background: Speech recognition abilities of adults and children using cochlear implants (CIs) are significantly degraded in the presence of background noise, making this an important area of study and assessment by CI manufacturers, researchers, and audiologists. However, at this time there are a limited number of fixed-intensity sentence recognition tests available that also have multiple, equally intelligible lists in noise. One measure of speech recognition, the AzBio Sentence Test, provides 10-talker babble on the commercially available compact disc; however, there is no published evidence to support equivalency of the 15-sentence lists in noise for listeners with normal hearing (NH) or CIs. Furthermore, there are limited or no published data on the reliability, validity, and norms for this test in noise for listeners with CIs or NH. Purpose: The primary goals of this study were to examine the equivalency of the AzBio Sentence Test lists at two signal-to-noise ratios (SNRs) in participants with NH and at one SNR for participants with CIs. Analyses were also conducted to establish the reliability, validity, and preliminary normative data for the AzBio Sentence Test for listeners with NH and CIs. Research Design: A cross-sectional, repeated-measures design was used to assess speech recognition in noise for participants with NH or CIs. Study Sample: The sample included 14 adults with NH and 12 adults or adolescents with Cochlear Freedom CI sound processors. Participants were recruited from the University of North Texas clinic population or from local CI centers. Data Collection and Analysis: Speech recognition was assessed using the 15 lists of the AzBio Sentence Test and the 10-talker babble. With the intensity of the sentences fixed at 73 dB SPL, listeners with NH were tested at 0 and −3 dB SNRs, and participants with CIs were tested at a +10 dB SNR. Repeated-measures analysis of variance (ANOVA) was used to analyze the data. Results: The primary analyses revealed significant differences in performance across the 15 lists on the AzBio Sentence Test for listeners with NH and CIs. However, a follow-up analysis revealed no significant differences in performance across 10 of the 15 lists. Using the 10 equally intelligible lists, a comparison of speech recognition performance across the two groups suggested similar performance between NH participants at a −3 dB SNR and the CI users at a +10 dB SNR. Several additional analyses were conducted to support the reliability and validity of the 10 equally intelligible AzBio sentence lists in noise, and preliminary normative data were provided. Conclusions: Ten lists of the commercial version of the AzBio Sentence Test may be used as a reliable and valid measure of speech recognition in noise in listeners with NH or CIs. The equivalent lists may be used for a variety of purposes including audiological evaluations, determination of CI candidacy, hearing aid and CI programming considerations, research, and recommendations for hearing assistive technology. In addition, the preliminary normative data provided in this study establish a starting point for the creation of comprehensive normative data for the AzBio Sentence Test.
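
A rough sketch of how list equivalency in noise can be screened (a repeated-measures ANOVA across lists, then flagging lists whose means sit close to the grand mean) is shown below. The simulated scores, the 5-point window, and the specific lists flagged are assumptions for illustration, not the study's data or criteria.

```python
# Illustrative sketch: screen sentence lists for equivalence in noise.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_listeners, n_lists = 14, 15
# Percent-correct scores: most lists near 60%, two deliberately harder.
scores = rng.normal(60, 8, size=(n_listeners, n_lists))
scores[:, 3] -= 12     # simulate a harder list
scores[:, 9] -= 10     # and another

long = pd.DataFrame({
    "subject": np.repeat(np.arange(n_listeners), n_lists),
    "list":    np.tile(np.arange(1, n_lists + 1), n_listeners),
    "score":   scores.ravel(),
})

# Omnibus test for a list effect.
print(AnovaRM(long, depvar="score", subject="subject", within=["list"]).fit())

# Flag lists whose mean falls within an arbitrary 5-point window of the grand mean.
list_means = long.groupby("list")["score"].mean()
equivalent = list_means[(list_means - list_means.mean()).abs() < 5]
print("Candidate equivalent lists:", sorted(equivalent.index.tolist()))
```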


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257568
Author(s):  
Xiao Gao ◽  
David Grayden ◽  
Mark McDonnell

Despite the development and success of cochlear implants over several decades, wide inter-subject variability in speech perception is reported. This suggests that cochlear implant user-dependent factors limit speech perception at the individual level. Clinical studies have demonstrated the influence of the number, placement, and insertion depth of electrodes on speech recognition abilities. However, these factors do not account for all inter-subject variability, and the extent to which each of them limits speech recognition ability has not been established. In this paper, an information-theoretic method and a machine learning technique are unified in a model to investigate the extent to which key factors limit cochlear implant electrode discrimination. The framework uses a neural network classifier to predict which electrode was stimulated for a given simulated activation pattern of the auditory nerve, and mutual information is then estimated between the actual and predicted stimulated electrodes. We also investigate how, and to what extent, the choice of parameters affects the performance of the model. The advantages of this framework are that (i) electrode discrimination ability is quantified using information theory, (ii) it provides a flexible framework that may be used to investigate the key factors that limit the performance of cochlear implant users, and (iii) it provides insights for future modeling studies of other types of neural prostheses.
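
The core of the framework, estimating mutual information between the actually stimulated electrode and the electrode predicted by a classifier, can be sketched from a confusion matrix of predictions. The matrix below is a placeholder with neighbor confusions, not the authors' simulated auditory-nerve data or model.

```python
# Illustrative sketch: mutual information (bits) between actual and predicted
# electrodes, computed from a joint-count (confusion) matrix of predictions.
import numpy as np

def mutual_information_bits(confusion):
    """MI in bits between true and predicted labels, from a joint-count matrix."""
    joint = confusion / confusion.sum()
    px = joint.sum(axis=1, keepdims=True)   # P(actual electrode)
    py = joint.sum(axis=0, keepdims=True)   # P(predicted electrode)
    nz = joint > 0                          # avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

n_electrodes = 22
# Hypothetical confusion matrix: mostly correct predictions plus neighbor confusions.
conf = np.eye(n_electrodes) * 80
for i in range(n_electrodes - 1):
    conf[i, i + 1] = conf[i + 1, i] = 10

print(f"Estimated MI: {mutual_information_bits(conf):.2f} bits "
      f"(upper bound log2({n_electrodes}) = {np.log2(n_electrodes):.2f} bits)")
```

Perfect electrode discrimination would give the upper bound of log2(22) bits; confusions between neighboring electrodes pull the estimate down, which is how discrimination ability is quantified in information-theoretic terms.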


2002 ◽  
Vol 11 (2) ◽  
pp. 124-127 ◽  
Author(s):  
Robert V. Shannon

Speech understanding with cochlear implants has improved steadily over the last 25 years, and the success of implants has provided a powerful tool for understanding speech recognition in general. Comparing speech recognition in normal-hearing listeners and in cochlear-implant listeners has revealed many important lessons about the types of information necessary for good speech recognition, and some of the lessons are surprising. This paper presents a summary of speech perception research over the last 25 years with cochlear-implant and normal-hearing listeners. As long as the speech is audible, even relatively severe amplitude distortion has only a mild effect on intelligibility. Temporal cues appear to be useful for speech intelligibility only up to about 20 Hz. Whereas temporal information above 20 Hz may contribute to improved quality, it contributes little to speech understanding. In contrast, the quantity and quality of spectral information appear to be critical for speech understanding. As few as four spectral "channels" of information can produce good speech understanding, but more channels are required for difficult listening situations. Speech understanding is sensitive to the placement of spectral information along the cochlea. In prosthetic devices, in which the spectral information can be delivered to any cochlear location, it is critical to present spectral information to the normal acoustic tonotopic location for that information. If there is a shift or distortion of 2 to 3 mm between frequency and cochlear place, speech recognition decreases dramatically.
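
To make the final point concrete, the sketch below converts a 3 mm basal shift of cochlear place into the corresponding frequency mismatch using Greenwood's place-frequency function with commonly cited human constants (A = 165.4 Hz, a = 0.06 per mm, k = 0.88); the exact constants vary across reports and are used here only for illustration.

```python
# Illustrative sketch: frequency mismatch produced by a 3 mm basal shift of
# spectral information, via Greenwood's place-frequency function for humans.
A_HZ, A_PER_MM, K = 165.4, 0.06, 0.88   # commonly cited constants (values vary)

def greenwood_freq_hz(x_mm_from_apex):
    """Characteristic frequency at a given distance (mm) from the cochlear apex."""
    return A_HZ * (10 ** (A_PER_MM * x_mm_from_apex) - K)

for place_mm in (10, 15, 20, 25):                 # distance from apex
    f0 = greenwood_freq_hz(place_mm)
    f_shifted = greenwood_freq_hz(place_mm + 3)   # information delivered 3 mm more basally
    print(f"{place_mm:2d} mm: {f0:6.0f} Hz -> {f_shifted:6.0f} Hz "
          f"({f_shifted / f0:.2f}x higher)")
```

The output illustrates why a 2 to 3 mm place mismatch is consequential: it corresponds to a substantial upward shift in the frequencies a listener effectively receives.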


2010 ◽  
Vol 21 (06) ◽  
pp. 380-389 ◽  
Author(s):  
Hugh McDermott ◽  
Katherine Henshall

Background: The number of cochlear implant (CI) recipients who have usable acoustic hearing in at least one ear is continuing to grow. Many such CI users gain perceptual benefits from the simultaneous use of acoustic and electric hearing. In particular, it has been shown previously that use of an acoustic hearing aid (HA) with a CI can often improve speech understanding in noise. Purpose: To determine whether the application of frequency compression in an HA would provide perceptual benefits to CI recipients with usable acoustic hearing, either when used in combination with the CI or when the HA was used by itself. Research Design: A repeated-measures experimental design was used to evaluate the effects on speech perception of using a CI either alone or simultaneously with an HA that had frequency compression either enabled or disabled. Study Sample: Eight adult CI recipients who were successful users of acoustic hearing aids in their nonimplanted ears participated as subjects. Intervention: The speech perception of each subject was assessed in seven conditions. These required each subject to listen with (1) their own HA alone; (2) the Phonak Naida HA with frequency compression (SoundRecover) enabled; (3) the Naida with SoundRecover disabled; (4) their CI alone; (5) their CI and their own HA; (6) their CI and the Naida with SoundRecover enabled; and (7) their CI and the Naida with SoundRecover disabled. Test sessions were scheduled over a period of about 10 wk. During part of that time, the subjects were asked to use the Phonak Naida HA with their CIs in place of their own HAs. Data Collection and Analysis: The speech perception tests included measures of consonant identification from a closed set of 12 items presented in quiet, and measures of sentence understanding in babble noise. The speech materials were presented at an average level of 60 dB SPL from a loudspeaker. Results: Speech perception was better, on average, in all conditions that included use of the CI in comparison with any condition in which only an HA was used. For example, consonant recognition improved by approximately 50 percentage points, on average, between the HA-alone listening conditions and the CI-alone condition. There were no statistically significant score differences between conditions with SoundRecover enabled and disabled. There was a small but significant improvement in the average signal-to-noise ratio (SNR) required to understand 50% of the words in the sentences presented in noise when an HA was used simultaneously with the CI. Conclusions: Although each of these CI users readily accepted the Phonak Naida HA with SoundRecover frequency compression, no benefits related specifically to the use of SoundRecover were found in the particular tests of speech understanding applied in this study. The relatively high levels of perceptual performance attained by these subjects with use of a CI by itself are consistent with the finding that the addition of an HA provided little further benefit. However, the use of an HA with the CI did provide better performance than the CI alone for understanding sentences in noise.
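
For context, nonlinear frequency compression of the kind SoundRecover provides can be sketched as mapping input frequencies above a cutoff toward the cutoff on a log-frequency scale. The cutoff and compression ratio below are arbitrary example settings, not Phonak's algorithm or the fitted values used in this study.

```python
# Illustrative sketch of nonlinear frequency compression: frequencies above a
# cutoff are compressed (here on a log-frequency scale); below the cutoff the
# signal is unchanged. Settings are arbitrary examples.
import math

def compress_frequency(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
    if f_in_hz <= cutoff_hz:
        return f_in_hz                              # below cutoff: unchanged
    octaves_above = math.log2(f_in_hz / cutoff_hz)  # distance above cutoff, in octaves
    return cutoff_hz * 2 ** (octaves_above / ratio) # compress that distance by the ratio

for f in (1000, 2000, 4000, 6000, 8000):
    print(f"{f:5d} Hz -> {compress_frequency(f):6.0f} Hz")
```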


2020 ◽  
Author(s):  
Chelsea Blankenship ◽  
Jareen Meinzen-Derr ◽  
Fawen Zhang

Objective: Individual differences in temporal processing contribute strongly to the large variability in speech recognition performance observed among cochlear implant (CI) recipients. Temporal processing is traditionally measured using a behavioral gap detection task, and therefore it can be challenging or infeasible to obtain reliable responses from young children and individuals with disabilities. Within-frequency gap detection (pre- and post-gap markers identical in frequency) is more common, yet across-frequency gap detection (pre- and post-gap markers spectrally distinct) is thought to be more important for speech perception because the phonemes that precede and follow rapid temporal cues are rarely identical in frequency. However, few studies have examined across-frequency temporal processing in CI recipients, none of which included across-frequency cortical auditory evoked potentials (CAEPs) or examined the correlation between across-frequency gap detection and speech perception. The purpose of the study was to evaluate behavioral and electrophysiological measures of across-frequency temporal processing and speech recognition in normal-hearing (NH) listeners and CI recipients. Design: Eleven post-lingually deafened adult CI recipients (n = 15 ears, mean age = 50.4 yr) and eleven age- and gender-matched NH individuals (n = 15 ears; mean age = 49.0 yr) participated. Speech perception was evaluated using the Minimum Speech Test Battery for Adult Cochlear Implant Users (CNC, AzBio, BKB-SIN). Across-frequency behavioral gap detection thresholds (GDTs; 2 kHz pre-gap tone, 1 kHz post-gap tone) were measured using an adaptive, two-alternative forced-choice paradigm. Across-frequency CAEPs were measured using four gap duration conditions: supra-threshold (behavioral GDT × 3), threshold (behavioral GDT), sub-threshold (behavioral GDT / 3), and reference (no gap). Group differences in behavioral GDTs and in CAEP amplitude and latency were evaluated using multiple mixed-effects models. Bivariate and multivariate canonical correlation analyses were used to evaluate the relationship between CAEP amplitude and latency, behavioral GDTs, and speech perception. Results: A significant effect of participant group was not observed for across-frequency GDTs; instead, older participants (> 50 yr) displayed larger GDTs than younger participants. CI recipients displayed increased P1 and N1 latencies compared to NH participants, and older participants displayed delayed N1 and P2 latencies compared to younger adults. Bivariate correlations between behavioral GDTs and speech perception measures were not significant (p > 0.01). Across-frequency canonical correlation analysis showed a significant relationship between the CAEP reference condition and behavioral measures of speech perception and temporal processing. Conclusions: CI recipients show across-frequency GDTs similar to those of NH participants; however, older participants (> 50 yr) displayed poorer temporal processing (larger GDTs) than younger participants. CI recipients and older participants displayed less efficient neural processing of the acoustic stimulus and slower transmission to the auditory cortex. An effect of gap duration on CAEP amplitude or latency was not observed. Canonical correlation analysis suggests that better cortical detection of frequency changes is correlated with better word and sentence understanding in quiet and in noise.
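
The adaptive two-alternative forced-choice gap detection measurement described above can be sketched as a two-down/one-up staircase on gap duration. The simulated listener, step factor, and stopping rule below are assumptions; the study's exact adaptive rules may differ.

```python
# Illustrative sketch: two-down/one-up 2AFC staircase for a gap detection
# threshold (GDT) with a 2 kHz pre-gap marker and 1 kHz post-gap marker.
import random

def listener_detects_gap(gap_ms, true_gdt_ms=25.0):
    """Hypothetical listener: probability of hearing the gap grows with its duration."""
    p_detect = 1.0 / (1.0 + (true_gdt_ms / max(gap_ms, 1e-3)) ** 3)
    return random.random() < (0.5 + 0.5 * p_detect)   # 2AFC: 50% correct by chance

def staircase_gdt(start_ms=100.0, factor=1.5, n_reversals=8):
    gap, direction, correct_in_a_row, reversals = start_ms, 0, 0, []
    while len(reversals) < n_reversals:
        if listener_detects_gap(gap):
            correct_in_a_row += 1
            if correct_in_a_row == 2:      # two correct in a row -> shorten the gap
                correct_in_a_row = 0
                if direction == +1:        # direction change counts as a reversal
                    reversals.append(gap)
                direction = -1
                gap /= factor
        else:
            correct_in_a_row = 0           # any error -> lengthen the gap
            if direction == -1:
                reversals.append(gap)
            direction = +1
            gap *= factor
    tail = reversals[-6:]
    return sum(tail) / len(tail)           # mean of final reversals (geometric mean also common)

print(f"Estimated across-frequency GDT: {staircase_gdt():.1f} ms")
```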


2020 ◽  
Vol 29 (4) ◽  
pp. 851-861
Author(s):  
Camille Dunn ◽  
Sharon E. Miller ◽  
Erin C. Schafer ◽  
Christopher Silva ◽  
René H. Gifford ◽  
...  

Purpose: This retrospective study used a cochlear implant registry to determine how performing speech recognition candidacy testing in quiet versus noise influenced patient selection, speech recognition, and self-report outcomes. Method: Database queries identified 1,611 cochlear implant recipients who were divided into three implant candidacy qualifying groups based on preoperative speech perception scores (≤ 40% correct) on the AzBio sentence test: quiet qualifying group, +10 dB SNR qualifying group, and +5 dB SNR qualifying group. These groups were evaluated for demographic and preoperative hearing characteristics. Repeated-measures analysis of variance was used to compare pre- and postoperative performance on the AzBio in quiet and noise with qualifying group as a between-subjects factor. For a subset of recipients, pre- to postoperative changes on the Speech, Spatial and Qualities of Hearing Scale were also evaluated. Results: Of the 1,611 patients identified as cochlear implant candidates, 63% of recipients qualified in quiet, 10% qualified in a +10 dB SNR, and 27% qualified in a +5 dB SNR. Postoperative speech perception scores in quiet and noise significantly improved for all qualifying groups. Across qualifying groups, the greatest speech perception improvements were observed when tested in the same qualifying listening condition. For a subset of patients, the total Speech, Spatial and Qualities of Hearing Scale ratings improved significantly as well. Conclusion: Patients who qualified for cochlear implantation in quiet or background noise test conditions showed significant improvement in speech perception and quality of life scores, especially when the qualifying noise condition was used to track performance.
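
The qualifying-group assignment described in the Method can be sketched as a simple rule applied to each candidate's preoperative AzBio scores. The data frame, column names, and the ordering of conditions below are assumptions for illustration only.

```python
# Illustrative sketch: assign each candidate to a qualifying group based on the
# most favorable condition in which the preoperative AzBio score is <= 40%.
import pandas as pd

candidates = pd.DataFrame({
    "patient":   [1, 2, 3, 4],
    "quiet_pct": [30, 55, 62, 70],   # AzBio in quiet (% correct)
    "snr10_pct": [18, 35, 50, 58],   # AzBio at +10 dB SNR
    "snr5_pct":  [10, 22, 33, 45],   # AzBio at +5 dB SNR
})

def qualifying_group(row, criterion=40):
    if row["quiet_pct"] <= criterion:
        return "quiet"
    if row["snr10_pct"] <= criterion:
        return "+10 dB SNR"
    if row["snr5_pct"] <= criterion:
        return "+5 dB SNR"
    return "did not qualify"

candidates["group"] = candidates.apply(qualifying_group, axis=1)
print(candidates)
```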


2021 ◽  
Vol 12 (1) ◽  
pp. 33
Author(s):  
Andres Camarena ◽  
Grace Manchala ◽  
Julianne Papadopoulos ◽  
Samantha R. O’Connell ◽  
Raymond L. Goldsworthy

Cochlear implants have been used to restore hearing to more than half a million people around the world. The restored hearing allows most recipients to understand spoken speech without relying on visual cues. While speech comprehension in quiet is generally high for recipients, many complain about the sound of music. The present study examines consonance and dissonance perception in nine cochlear implant users and eight people with no known hearing loss. Participants completed web-based assessments to characterize low-level psychophysical sensitivities to modulation and pitch, as well as higher-level measures of musical pleasantness and speech comprehension in background noise. The underlying hypothesis is that sensitivity to modulation and pitch, in addition to higher levels of musical sophistication, relates to higher-level measures of music and speech perception. This hypothesis held true, with strong correlations observed between measures of modulation and pitch sensitivity and measures of consonance ratings and speech recognition. Additionally, the cochlear implant users who were the most sensitive to modulation and pitch, and who had higher musical sophistication scores, gave pleasantness ratings similar to those of listeners with no known hearing loss. The implication is that better coding of, and focused rehabilitation for, modulation and pitch sensitivity will broadly improve perception of music and speech for cochlear implant users.
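
The correlational analysis summarized above can be sketched as Pearson correlations between the psychophysical measures and the music and speech outcomes; the arrays below are placeholder values, not the study's data.

```python
# Illustrative sketch: correlate psychophysical sensitivity with outcome
# measures. All values are invented for demonstration.
import numpy as np
from scipy.stats import pearsonr

modulation_sensitivity = np.array([0.9, 1.4, 2.0, 2.6, 3.1, 3.8, 4.2, 4.9, 5.5])
consonance_rating      = np.array([2.1, 2.4, 3.0, 3.2, 3.8, 4.0, 4.3, 4.6, 4.9])
speech_in_noise_pct    = np.array([35,  40,  48,  55,  60,  66,  70,  78,  85])

for name, outcome in [("consonance rating", consonance_rating),
                      ("speech in noise (%)", speech_in_noise_pct)]:
    r, p = pearsonr(modulation_sensitivity, outcome)
    print(f"modulation sensitivity vs {name}: r = {r:.2f}, p = {p:.3f}")
```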

