Effect of Microphone Location and Beamforming Technology on Speech Recognition in Pediatric Cochlear Implant Recipients

Author(s):  
Jourdan T. Holder ◽  
Adrian L. Taylor ◽  
Linsey W. Sunderhaus ◽  
Rene H. Gifford

Background: Despite improvements in cochlear implant (CI) technology, pediatric CI recipients continue to have more difficulty understanding speech in background noise than their typically hearing peers. A variety of strategies have been evaluated to help mitigate this disparity, such as signal processing, remote microphone technology, and microphone placement. Previous studies regarding microphone placement used speech processors that are now dated, and most studies investigating the improvement of speech recognition in background noise included adult listeners only. Purpose: The purpose of the present study was to investigate the effects of microphone location and beamforming technology on speech understanding for pediatric CI recipients in noise. Research Design: A prospective, repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample: A total of nine children (aged 6.6 to 15.3 years) with at least one Advanced Bionics CI were recruited for this study. Data Collection and Analysis: The Basic English Lexicon Sentences and AzBio Sentences were presented at 0° azimuth at 65 dB SPL at a +5 dB signal-to-noise ratio, with noise presented from seven loudspeakers using the R-SPACE system (Advanced Bionics, Valencia, CA). Performance was compared across three omnidirectional microphone configurations (processor microphone, T-Mic 2, and processor + T-Mic 2) and two directional microphone configurations (UltraZoom and auto UltraZoom). The two youngest participants were not tested in the directional microphone configurations. Results: No significant differences were found between the various omnidirectional microphone configurations. UltraZoom provided significant benefit over all omnidirectional microphone configurations (T-Mic 2, p = 0.004; processor microphone, p < 0.001; and processor microphone + T-Mic 2, p = 0.018) but was not significantly different from auto UltraZoom (p = 0.176). Conclusions: All omnidirectional microphone configurations yielded similar performance, suggesting that a child's listening performance in noise will not be compromised by choosing the microphone configuration best suited for the child. UltraZoom (adaptive beamformer) yielded higher performance than all omnidirectional microphones in moderate background noise for adolescents aged 9 to 15 years. These data suggest that for older children who are able to reliably use manual controls, UltraZoom will yield significantly higher performance in background noise when the target is in front of the listener.
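
The UltraZoom beamformer evaluated above is a proprietary adaptive dual-microphone system, but the underlying idea of building a directional response from two omnidirectional elements can be illustrated with a first-order differential beamformer. The sketch below (Python; the function name, microphone spacing, and whole-sample delay are illustrative assumptions, not Advanced Bionics' implementation) shows how delaying the rear microphone signal by the inter-microphone travel time and subtracting it cancels sound arriving from behind the listener while largely preserving sound from the front.

```python
import numpy as np

def differential_directional(front, back, fs, mic_spacing_m=0.012, c=343.0):
    """First-order differential (cardioid-like) beamformer from two omni mics.

    A rear-arriving wavefront reaches the back microphone first, so delaying
    the back signal by the acoustic travel time between the microphones and
    subtracting it from the front signal cancels rear-arriving sound, while
    sound from the front is largely preserved (it is only high-pass filtered).
    Real devices use fractional-sample delays and adaptive null steering;
    this whole-sample version is conceptual only.
    """
    delay = max(1, int(round(mic_spacing_m / c * fs)))
    delayed_back = np.concatenate([np.zeros(delay), back[:-delay]])
    return front - delayed_back
```

An adaptive beamformer such as UltraZoom additionally steers the rear-facing null toward the dominant noise source; the fixed-delay version above only illustrates the front/back asymmetry that produces the directional benefit reported here.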

2011 ◽  
Vol 22 (09) ◽  
pp. 623-632 ◽  
Author(s):  
René H. Gifford ◽  
Amy P. Olund ◽  
Melissa DeJong

Background: Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. Purpose: To assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Research Design: Single subject, repeated measures design. Study Sample: Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Intervention: Speech reception thresholds (SRT) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects’ everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity control (ASC). Data Collection and Analysis: Adaptive SRTs with the Hearing In Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance—in percent correct—was assessed in a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. Results: The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean degree of improvement in the SRT with the addition of ASC to ADRO was 3.5 dB for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit encountered in noisy environments accompanying the diagnosis of severe-to-profound hearing loss. Conclusion: SmartSound strategies currently available in latest generation Nucleus cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO for everyday listening environments to improve speech perception in a child's typical everyday program.
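
The SRTs above were measured adaptively: the SNR is raised after an incorrect sentence repetition and lowered after a correct one until the track converges on the 50%-correct point. The following sketch (a generic 1-up/1-down staircase in Python; `present_trial`, the step size, and the trial count are illustrative assumptions, not the exact HINT protocol used in the study) shows the general form of such a procedure.

```python
def adaptive_srt(present_trial, start_snr_db=10.0, step_db=2.0, n_trials=20):
    """Generic 1-up/1-down adaptive track converging on roughly 50% correct.

    present_trial(snr_db) should present one sentence at the given SNR and
    return True if the listener repeated it correctly. The SNR is made harder
    after a correct response and easier after an error, and the SRT is taken
    as the mean SNR over the second half of the track.
    """
    snr, track = start_snr_db, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step_db if present_trial(snr) else step_db
    tail = track[len(track) // 2:]
    return sum(tail) / len(tail)
```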


2015 ◽  
Vol 26 (06) ◽  
pp. 532-539 ◽  
Author(s):  
Jace Wolfe ◽  
Mila Morais ◽  
Erin Schafer

Background: Cochlear implant (CI) recipients experience difficulty understanding speech in noise. Remote-microphone technology that improves the signal-to-noise ratio is recognized as an effective means to improve speech recognition in noise; however, there are no published studies evaluating the potential benefits of a wireless, remote-microphone, digital, audio-streaming accessory device (heretofore referred to as a remote-microphone accessory) designed to deliver audio signals directly to a CI sound processor. Purpose: The objective of this study was to compare speech recognition in quiet and in noise of recipients while using their CI alone and with a remote-microphone accessory. Research Design: A two-way repeated measures design was used to evaluate performance differences obtained in quiet and in increasing levels of competing noise with the CI sound processor alone and with the sound processor paired to the remote microphone accessory. Study Sample: Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Data Collection and Analysis: Participants were evaluated in 14 conditions including use of the sound processor alone and with the remote-microphone accessory in quiet and at the following signal levels: 65 dBA speech (at the location of the participant; 85 dBA at the location of the remote microphone) in quiet and competing noise at 50, 55, 60, 65, 70, and 75 dBA noise levels. Speech recognition was evaluated in each of these conditions with one full list of AzBio sentences. Results: Speech recognition in quiet and in all competing noise levels, except the 75 dBA condition, was significantly better with use of the remote-microphone accessory compared with participants’ performance with the CI sound processor alone. As expected, in all technology conditions, performance was significantly poorer as the competing noise level increased. Conclusions: Use of a remote-microphone accessory designed for a CI sound processor provides superior speech recognition in quiet and in noise when compared with performance obtained with the CI sound processor alone.
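
The remote-microphone advantage in the conditions above comes from capturing speech close to the talker's mouth, where its level is much higher, while the diffuse competing noise is roughly the same everywhere in the room. Treating SNR as a simple level difference makes the effect concrete (the 70 dBA noise value below is one of the study's levels; assuming equal noise at both microphone positions is an illustrative simplification):

```python
def snr_db(speech_dba, noise_dba):
    """Signal-to-noise ratio expressed as a simple level difference in dB."""
    return speech_dba - noise_dba

# Speech reaches the processor microphone at 65 dBA, but it reaches the
# remote microphone, worn near the talker's mouth, at 85 dBA. With 70 dBA
# of competing noise at both positions:
print(snr_db(65, 70))  # processor microphone alone:  -5 dB SNR
print(snr_db(85, 70))  # remote-microphone accessory: +15 dB SNR
```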


2018 ◽  
Vol 29 (09) ◽  
pp. 814-825 ◽  
Author(s):  
Patti M. Johnstone ◽  
Kristen E. T. Mills ◽  
Elizabeth Humphrey ◽  
Kelly R. Yeager ◽  
Emily Jones ◽  
...  

Cochlear implant (CI) users are affected more than their normal-hearing (NH) peers by the negative consequences of background noise on speech understanding. Research has shown that adult CI users can improve their speech recognition in challenging listening environments by using dual-microphone beamformers, such as adaptive directional microphones (ADMs) and wireless remote microphones (RMs). The suitability of these microphone technologies for use in children with CIs is not well understood nor widely accepted. The purpose of this study was to assess the benefit of ADM or RM technology on speech perception in background noise in children and adolescents with CIs who had no previous or current use of an ADM or RM. A mixed, repeated-measures design was used. Twenty (20) children participated in this prospective study: ten (10) CI users (mean age 14.3 yr) who used Advanced Bionics HiRes90K implants with research Naida processors and ten (10) age-matched NH controls. CI users listened with an ear-canal-level microphone, the T-Mic (TM), an ADM, and a wireless RM at different audio-mixing ratios. Speech understanding with five microphone settings (TM 100%, ADM, RM + TM 50/50, RM + TM 75/25, RM 100%) was evaluated in quiet and in noise. Speech perception ability was measured using children's spondee words to obtain a speech recognition threshold for 80% accuracy (SRT80%) in 20-talker babble; the listener sat in a sound booth 1 m (3.28′) from the target speech (front) and the noise (behind) while each of the five microphone settings was tested. Group performance-intensity functions were computed for each listening condition to show the effects of microphone configuration with respect to signal-to-noise ratio (SNR). A difference score (CI group minus NH group) was computed to show the effect of microphone technology at different SNRs relative to NH. Statistical analysis using a repeated-measures analysis of variance evaluated the effects of the microphone configurations on the SRT80% and on performance at fixed SNRs, and a between-groups analysis of variance was used to compare the CI group with the NH group. Speech recognition was significantly poorer for children with CIs than for children with NH in quiet and in noise when using the TM alone. Adding the ADM or RM provided a significant improvement in speech recognition for the CI group over use of the TM alone in noise (mean advantage ranged from 5.8 dB for the ADM to 16 dB for RM 100%). When children with CIs used the RM 75/25 or RM 100% settings in background babble, their speech recognition was not statistically different from that of the NH group. Speech recognition in noise improved with the use of the ADM, RM 100%, or RM 75/25 over the TM alone for children with CIs. Although children with CIs remain at a disadvantage compared with NH children in quiet and at more favorable SNRs, microphone technology can enhance performance for some children with CIs to match that of NH peers in contexts with negative SNRs.
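
The SRT80% values above come from performance-intensity functions: percent correct is measured at several SNRs and a psychometric function is fitted so that the SNR yielding 80% correct can be read off. A minimal sketch of that fit (Python with SciPy; the data points are hypothetical, not the study's results):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr_db, midpoint, slope):
    """Psychometric function: proportion correct versus SNR in dB."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))

def srt_at(snrs_db, prop_correct, target=0.80):
    """Fit a logistic performance-intensity function and invert it at the
    target proportion correct (e.g., 0.80 for the SRT80%)."""
    (midpoint, slope), _ = curve_fit(logistic, snrs_db, prop_correct, p0=[0.0, 0.5])
    return midpoint + np.log(target / (1.0 - target)) / slope

# Hypothetical spondee scores at five SNRs for one microphone setting
snrs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
props = np.array([0.10, 0.30, 0.60, 0.85, 0.95])
print(f"Estimated SRT80%: {srt_at(snrs, props):.1f} dB SNR")
```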


2015 ◽  
Vol 24 (1) ◽  
pp. 31-39 ◽  
Author(s):  
Douglas P. Sladen ◽  
Amanda Zappler

Purpose To determine whether older cochlear implant (CI) listeners differ from younger CI listeners on measures of speech understanding, music perception, and health-related quality of life (HRQoL). In the study, the authors hypothesized that speech recognition would be more difficult for older adults, especially in noisy conditions. Performance on music perception was expected to be lower for older implanted listeners. No differences between age groups were expected on HRQoL. Method Twenty older (>60 years) and 20 younger (<60 years) implanted adults participated. Speech understanding was assessed using words and sentences presented in quiet, and sentences presented at +15, +10, and +5 dB signal-to-noise ratio conditions. Music perception was tested using the University of Washington Clinical Assessment of Music, and HRQoL was measured using the Nijmegen CI survey. Results Speech understanding was significantly lower for the older compared with the younger group in all conditions. Older implanted adults showed lower performance on music perception compared with younger implanted adults on 1 of 3 subtests. Older adults reported lower HRQoL benefit than younger adults on 3 of 6 subdomains. Conclusion Data indicate that older CI listeners performed more poorly than younger CI listeners, although group differences appear to be task specific.


2019 ◽  
Vol 30 (08) ◽  
pp. 731-734
Author(s):  
Michael F. Dorman ◽  
Sarah Cook Natale

When cochlear implant (CI) listeners use a directional microphone or beamformer system to improve speech understanding in noise, the gain in understanding for speech presented from the front of the listener coexists with a decrease in speech understanding from the back. One way to maximize the usefulness of these systems is to keep a microphone in the omnidirectional mode in low noise and then switch to directional mode in high noise. The purpose of this experiment was to assess the levels of speech understanding in noise allowed by a new signal-processing algorithm for MED-EL CIs, AutoAdaptive, which operates in the manner described previously. Seven listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant with speech presented from the front and from the back at three noise levels: 45, 55, and 65 dB SPL. The listeners were seated in the middle of an array of eight loudspeakers. Sentences from the AzBio sentence lists were presented from loudspeakers at 0° or 180° azimuth. Restaurant noise at 45, 55, and 65 dB SPL was presented from all eight loudspeakers. The speech understanding scores (words correct) were subjected to a two-factor (loudspeaker location and noise level), repeated-measures analysis of variance with posttests. The analysis of variance showed main effects of level and location and a significant interaction. Posttests showed that speech understanding scores for the front and back loudspeakers did not differ significantly at the 45- and 55-dB noise levels but did differ significantly at the 65-dB noise level, with increased scores for signals from the front and decreased scores for signals from the back. The AutoAdaptive feature thus provides omnidirectional benefit at low noise levels, i.e., similar levels of speech understanding for talkers in front of and behind the listener, and beamformer benefit at higher noise levels, i.e., increased speech understanding for signals from the front. The automatic switching feature will be of value to the many patients who prefer not to switch programs on their CIs manually.
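
The two-factor, repeated-measures analysis described above (loudspeaker location × noise level, with every listener contributing a score in each cell) can be set up as in the following sketch (Python with statsmodels; the scores are simulated so as to mimic the reported interaction and are not the study's data).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data: one words-correct score per listener (n = 7),
# loudspeaker location (front/back), and noise level (45/55/65 dB SPL).
rng = np.random.default_rng(0)
rows = []
for subject in range(1, 8):
    for location in ("front", "back"):
        for noise_db in (45, 55, 65):
            # Scores fall as noise rises; the front/back gap opens only at
            # 65 dB, mimicking the interaction reported in the study.
            base = 85 - (noise_db - 45)
            gap = 15 if (noise_db == 65 and location == "back") else 0
            rows.append((subject, location, noise_db,
                         base - gap + rng.normal(0, 3)))

df = pd.DataFrame(rows, columns=["subject", "location", "noise_db", "score"])
aov = AnovaRM(df, depvar="score", subject="subject",
              within=["location", "noise_db"]).fit()
print(aov)  # main effects of location and noise level, plus their interaction
```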


2010 ◽  
Vol 21 (06) ◽  
pp. 380-389 ◽  
Author(s):  
Hugh McDermott ◽  
Katherine Henshall

Background: The number of cochlear implant (CI) recipients who have usable acoustic hearing in at least one ear is continuing to grow. Many such CI users gain perceptual benefits from the simultaneous use of acoustic and electric hearing. In particular, it has been shown previously that use of an acoustic hearing aid (HA) with a CI can often improve speech understanding in noise. Purpose: To determine whether the application of frequency compression in an HA would provide perceptual benefits to CI recipients with usable acoustic hearing, either when used in combination with the CI or when the HA was used by itself. Research Design: A repeated-measures experimental design was used to evaluate the effects on speech perception of using a CI either alone or simultaneously with an HA that had frequency compression either enabled or disabled. Study Sample: Eight adult CI recipients who were successful users of acoustic hearing aids in their nonimplanted ears participated as subjects. Intervention: The speech perception of each subject was assessed in seven conditions. These required each subject to listen with (1) their own HA alone; (2) the Phonak Naida HA with frequency compression (SoundRecover) enabled; (3) the Naida with SoundRecover disabled; (4) their CI alone; (5) their CI and their own HA; (6) their CI and the Naida with SoundRecover enabled; and (7) their CI and the Naida with SoundRecover disabled. Test sessions were scheduled over a period of about 10 wk. During part of that time, the subjects were asked to use the Phonak Naida HA with their CIs in place of their own HAs. Data Collection and Analysis: The speech perception tests included measures of consonant identification from a closed set of 12 items presented in quiet, and measures of sentence understanding in babble noise. The speech materials were presented at an average level of 60 dB SPL from a loudspeaker. Results: Speech perception was better, on average, in all conditions that included use of the CI in comparison with any condition in which only an HA was used. For example, consonant recognition improved by approximately 50 percentage points, on average, between the HA-alone listening conditions and the CI-alone condition. There were no statistically significant score differences between conditions with SoundRecover enabled and disabled. There was a small but significant improvement in the average signal-to-noise ratio (SNR) required to understand 50% of the words in the sentences presented in noise when an HA was used simultaneously with the CI. Conclusions: Although each of these CI users readily accepted the Phonak Naida HA with SoundRecover frequency compression, no benefits related specifically to the use of SoundRecover were found in the particular tests of speech understanding applied in this study. The relatively high levels of perceptual performance attained by these subjects with use of a CI by itself are consistent with the finding that the addition of an HA provided little further benefit. However, the use of an HA with the CI did provide better performance than the CI alone for understanding sentences in noise.
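
SoundRecover is a nonlinear frequency-compression scheme: input frequencies below a cutoff pass through unchanged, while frequencies above it are remapped into a narrower band so that high-frequency speech cues land where more residual hearing remains. The following is only a conceptual sketch of such a mapping; the cutoff, compression ratio, and log-scale formula are generic assumptions, not Phonak's proprietary parameters.

```python
def compress_frequency(f_hz, cutoff_hz=1500.0, ratio=2.0):
    """Map an input frequency to an output frequency under nonlinear
    frequency compression: below the cutoff, frequencies are unchanged;
    above it, the distance from the cutoff (on a log-frequency scale) is
    divided by the compression ratio."""
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz * (f_hz / cutoff_hz) ** (1.0 / ratio)

# Example: a 6 kHz consonant cue is relocated to 3 kHz, where a listener
# with a steeply sloping loss may have more usable residual hearing.
print(compress_frequency(6000.0))  # 3000.0
```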


2010 ◽  
Vol 21 (08) ◽  
pp. 546-557 ◽  
Author(s):  
Kristi Oeding ◽  
Michael Valente ◽  
Jessica Kerckhoff

Background: Patients with unilateral sensorineural hearing loss (USNHL) experience great difficulty listening to speech in noisy environments. A directional microphone (DM) could potentially improve speech recognition in this difficult listening environment. It is well known that DMs in behind-the-ear (BTE) and custom hearing aids can provide a greater signal-to-noise ratio (SNR) in comparison to an omnidirectional microphone (OM) to improve speech recognition in noise for persons with hearing impairment. Studies examining the DM in bone anchored auditory osseointegrated implants (Baha), however, have been mixed, with little to no benefit reported for the DM compared to an OM. Purpose: The primary purpose of this study was to determine if there are statistically significant differences in the mean reception threshold for sentences (RTS in dB) in noise between the OM and DM in the Baha® Divino™. The RTS of these two microphone modes was measured utilizing two loudspeaker arrays (speech from 0° and noise from 180° or a diffuse eight-loudspeaker array) and with the better ear open or closed with an earmold impression and noise attenuating earmuff. Subjective benefit was assessed using the Abbreviated Profile of Hearing Aid Benefit (APHAB) to compare unaided and aided (Divino OM and DM combined) problem scores. Research Design: A repeated measures design was utilized, with each subject counterbalanced to each of the eight treatment levels for three independent variables: (1) microphone (OM and DM), (2) loudspeaker array (180° and diffuse), and (3) better ear (open and closed). Study Sample: Sixteen subjects with USNHL currently utilizing the Baha were recruited from Washington University's Center for Advanced Medicine and the surrounding area. Data Collection and Analysis: Subjects were tested at the initial visit if they entered the study wearing the Divino or after at least four weeks of acclimatization to a loaner Divino. The RTS was determined utilizing Hearing in Noise Test (HINT) sentences in the R-Space™ system, and subjective benefit was determined utilizing the APHAB. A three-way repeated measures analysis of variance (ANOVA) and a paired samples t-test were utilized to analyze results of the HINT and APHAB, respectively. Results: Results revealed statistically significant differences within microphone (p < 0.001; directional advantage of 3.2 dB), loudspeaker array (p = 0.046; 180° advantage of 1.1 dB), and better ear conditions (p < 0.001; open ear advantage of 4.9 dB). Results from the APHAB revealed statistically and clinically significant benefit for the Divino relative to unaided on the subscales of Ease of Communication (EC) (p = 0.037), Background Noise (BN) (p < 0.001), and Reverberation (RV) (p = 0.005). Conclusions: The Divino's DM provides a statistically significant improvement in speech recognition in noise compared to the OM for subjects with USNHL. Therefore, it is recommended that audiologists consider selecting a Baha with a DM to provide improved speech recognition performance in noisy listening environments.


2010 ◽  
Vol 124 (8) ◽  
pp. 828-834 ◽  
Author(s):  
W Di Nardo ◽  
A Scorpecci ◽  
S Giannantonio ◽  
F Cianfrone ◽  
C Parrilla ◽  
...  

Objective: To assess the electrode pitch function in a series of adults with postlingually implanted cochlear implants and with contralateral residual hearing, in order to investigate the correlation between the degree of frequency-map mismatch and the subjects' speech understanding in quiet and noisy conditions. Design: Case series. Subjects: Seven postlingually deafened adults with cochlear implants, all with detectable contralateral residual hearing. Subjects' electrode pitch function was assessed by means of a pitch-matching test, in which they were asked to match an acoustic pitch (pure tones delivered to the non-implanted ear by an audiometer) to the perceived ‘pitch’ elicited by stimulation of the cochlear implant electrodes. A mismatch score was calculated for each subject. Speech recognition was tested using lists of sentences presented in quiet conditions and at +10, 0 and −5 dB HL signal-to-noise ratio levels (i.e., noise 10 dB HL lower than the signal, noise as loud as the signal, and noise 5 dB HL higher than the signal, respectively). Correlations were assessed using a linear regression model, with significance set at p < 0.05. Results: All patients presented some degree of mismatch between the acoustic frequencies assigned to their implant electrodes and the pitch elicited by stimulation of the same electrodes, with high between-individual variability. A significant correlation (p < 0.005) was found between mismatch and speech recognition scores at the +10 and 0 dB HL signal-to-noise ratio levels (r² = 0.91 and 0.89, respectively). Conclusion: The mismatch between the frequencies allocated to electrodes and the pitch perceived on stimulation of the same electrodes could partially account for our subjects' difficulties with speech understanding in noisy conditions. We suggest that these subjects could benefit from mismatch correction, through a procedure allowing individualised reallocation of frequency bands to electrodes.
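
The correlation analysis above amounts to a simple linear regression of sentence scores on the mismatch score, reporting r² and p. A minimal sketch (Python with SciPy; the values are hypothetical and only illustrate the computation, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject pitch-mismatch scores and sentence scores
# (% correct) at the +10 dB HL signal-to-noise level.
mismatch = np.array([0.4, 0.9, 1.3, 1.8, 2.2, 2.9, 3.5])
speech_pct = np.array([92, 85, 80, 71, 60, 48, 35])

result = stats.linregress(mismatch, speech_pct)
print(f"slope = {result.slope:.1f} %/unit mismatch, "
      f"r^2 = {result.rvalue ** 2:.2f}, p = {result.pvalue:.4f}")
```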


2008 ◽  
Vol 18 (1) ◽  
pp. 19-24
Author(s):  
Erin C. Schafer

Children who use cochlear implants experience significant difficulty hearing speech in the presence of background noise, such as in the classroom. To address these difficulties, audiologists often recommend frequency-modulated (FM) systems for children with cochlear implants. The purpose of this article is to examine current empirical research in the area of FM systems and cochlear implants. Discussion topics will include selecting the optimal type of FM receiver, benefits of binaural FM-system input, importance of DAI receiver-gain settings, and effects of speech-processor programming on speech recognition. FM systems significantly improve the signal-to-noise ratio at the child's ear through the use of three types of FM receivers: mounted speakers, desktop speakers, or direct-audio input (DAI). This discussion will aid audiologists in making evidence-based recommendations for children using cochlear implants and FM systems.


2019 ◽  
Vol 30 (08) ◽  
pp. 659-671 ◽  
Author(s):  
Ashley Zaleski-King ◽  
Matthew J. Goupell ◽  
Dragana Barac-Cikoja ◽  
Matthew Bakke

Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are, at best, inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) well enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments. The purpose of this study was to determine whether bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task. A repeated-measures design was used. Seven adult bimodal CI users (28–62 years) participated; all listeners reported regular use of digital HA technology in the nonimplanted ear. The seven bimodal listeners were first asked to balance the loudness of prerecorded single-syllable utterances. The loudness-balanced stimuli were then presented via the direct audio inputs of the two devices with an ITD applied, and the task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured for different spatial locations both with and without inclusion of the IDD correction, which was added with the intent to perceptually synchronize the devices. During the loudness-balancing task, all listeners required increased acoustic input to the HA relative to the CI most comfortable level to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left or the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other. When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who were able to consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though they could have used a monaural loudness-cue strategy. These data suggest that sound localization is extremely difficult for most bimodal listeners, and this difficulty does not seem to be caused by large loudness imbalances or IDDs. Sound localization is best when performed via a binaural comparison, in which frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with a HA, when combined with a CI, may produce an overlapping region of frequency-matched inputs and thus provide an opportunity for binaural comparisons for some bimodal listeners, our study showed that this may not be beneficial or useful for spatial-location discrimination tasks. The inability of our listeners to use monaural level cues to perform the MAA task highlights the difficulty of using a HA and CI together to glean information on the direction of a sound source.
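
The ITD manipulation described above, including the IDD correction, reduces to delaying one device's input relative to the other before presentation via direct audio input. A whole-sample sketch of that operation follows (illustrative only; the study's stimulus preparation is not described at this level of detail).

```python
import numpy as np

def delay_channel(signal, delay_ms, fs):
    """Delay a single channel by delay_ms (whole-sample approximation).

    Used here to impose an ITD or to offset an estimated interdevice delay:
    delaying the earlier-processing device by the IDD perceptually
    synchronizes the two ears. Sub-millisecond precision in practice calls
    for fractional-delay filtering rather than whole samples.
    """
    n = int(round(delay_ms * 1e-3 * fs))
    return np.concatenate([np.zeros(n), signal[:len(signal) - n]])

# Example: at fs = 44100 Hz, delaying the CI-side stimulus by 0.73 ms
# (about 32 samples) moved one listener's percept across the midline.
```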

