Bilateral Cochlear Implants Allow Listeners to Benefit from Visual Information When Talker Location is Varied

2020 ◽  
Vol 31 (07) ◽  
pp. 547-550
Author(s):  
Michael F. Dorman ◽  
Sarah Natale ◽  
Alissa Knickerbocker

Abstract

Background: Previous research has found that when the location of a talker was varied and an auditory prompt indicated the location of the talker, the addition of visual information produced a significant and large improvement in speech understanding for listeners with bilateral cochlear implants (CIs) but not with a unilateral CI. Presumably, the sound-source localization ability of the bilateral CI listeners allowed them to orient to the auditory prompt and benefit from visual information for the subsequent target sentence.

Purpose: The goal of this project was to assess the robustness of previous research by using a different test environment, a different CI, different test material, and a different response measure.

Research Design: Nine listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant. Auditory–visual (AV) sentence material was presented from loudspeakers and video monitors at 0, +90, and −90 degrees. Each trial started with the presentation of an auditory alerting phrase from one of the three target loudspeakers, followed by an AV target sentence from that loudspeaker/monitor. On each trial, the two nontarget monitors showed the speaker mouthing a different sentence. Sentences were presented in noise in four test conditions: one CI, one CI plus vision, bilateral CIs, and bilateral CIs plus vision.

Results: Mean percent words correct for the four test conditions were: one CI, 43%; bilateral CI, 60%; one CI plus vision, 52%; and bilateral CI plus vision, 84%. Visual information did not significantly improve performance in the single CI conditions but did improve performance in the bilateral CI conditions. The magnitude of improvement for two CIs versus one CI in the AV condition was approximately twice that for two CIs versus one CI in the auditory condition.

Conclusions: Our results are consistent with previous data showing the large value of bilateral implants in a complex AV listening environment. The results indicate that the value of bilateral CIs for speech understanding is significantly underestimated in standard, auditory-only, single-speaker test environments.
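The roughly twofold AV advantage reported in the Results follows directly from the group means; a minimal sketch in Python (the condition labels are ours, the scores are the abstract's):

```python
# Mean percent-words-correct scores for the four test conditions in the abstract.
scores = {
    "one_ci": 43,
    "bilateral_ci": 60,
    "one_ci_plus_vision": 52,
    "bilateral_ci_plus_vision": 84,
}

# Bilateral benefit (two CIs vs. one) in each modality, in percentage points.
auditory_benefit = scores["bilateral_ci"] - scores["one_ci"]                    # 17 points
av_benefit = scores["bilateral_ci_plus_vision"] - scores["one_ci_plus_vision"]  # 32 points

# The AV benefit is approximately twice the auditory-only benefit.
print(auditory_benefit, av_benefit, round(av_benefit / auditory_benefit, 2))
```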

2016 ◽  
Vol 59 (6) ◽  
pp. 1505-1519 ◽  
Author(s):  
Michael F. Dorman ◽  
Julie Liss ◽  
Shuai Wang ◽  
Visar Berisha ◽  
Cimarron Ludwig ◽  
...  

Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs).

Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users.

Results:
(a) Most CI users report that most of the time, they have access to both A and V information when listening to speech.
(b) CI users did not achieve better scores on a task of speechreading than did listeners with normal hearing.
(c) Sentences that are easy to speechread provided 12 percentage points more gain to speech understanding than did sentences that were difficult.
(d) Ease of speechreading for sentences is related to phrase familiarity.
(e) Users of bimodal CIs benefit from low-frequency acoustic hearing even when V cues are available, and a second CI adds to the benefit of a single CI when V cues are available.
(f) V information facilitates lexical segmentation by improving the recognition of the number of syllables produced and the relative strength of these syllables.

Conclusions: Our data are consistent with the view that V information improves CI users' ability to identify syllables in the acoustic stream and to recognize their relative juxtaposed strengths. Enhanced syllable resolution allows better identification of word onsets, which, when combined with place-of-articulation information from visible consonants, improves lexical access.
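Finding (c) treats visual gain as the AV score minus the A-only score; a minimal sketch with hypothetical percent-correct values (the sentence sets and scores below are illustrative, not the study's data):

```python
def visual_gain(av_percent, a_only_percent):
    """Benefit of adding visual cues, in percentage points."""
    return av_percent - a_only_percent

# Hypothetical scores for an easy- and a hard-to-speechread sentence set.
easy_gain = visual_gain(78.0, 50.0)   # 28-point gain
hard_gain = visual_gain(66.0, 50.0)   # 16-point gain

# Easy sentences yield 12 percentage points more gain, mirroring finding (c).
print(easy_gain - hard_gain)
```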


Author(s):  
Michael F. Dorman ◽  
Sarah Cook Natale ◽  
Smita Agrawal

Abstract

Background: Both the Roger remote microphone and on-ear, adaptive beamforming technologies (e.g., Phonak UltraZoom) have been shown to improve speech understanding in noise for cochlear implant (CI) listeners when tested in audio-only (A-only) test environments.

Purpose: Our aim was to determine if adult and pediatric CI recipients benefited from these technologies in a more common environment, one in which both audio and visual cues were available and overall performance was high.

Study Sample: Ten adult CI listeners (Experiment 1) and seven pediatric CI listeners (Experiment 2) were tested.

Design: Adults were tested in quiet and in two levels of noise (level 1 and level 2) in A-only and audio-visual (AV) environments. There were four device conditions: (1) an ear canal-level, omnidirectional microphone (T-mic) in quiet, (2) the T-mic in noise, (3) an adaptive directional mic (UltraZoom) in noise, and (4) a wireless, remote mic (Roger Pen) in noise. Pediatric listeners were tested in quiet and in level 1 noise in A-only and AV environments. The test conditions were: (1) a behind-the-ear-level omnidirectional mic (processor mic) in quiet, (2) the processor mic in noise, (3) the T-mic in noise, and (4) the Roger Pen in noise.

Data Collection and Analyses: In each test condition, sentence understanding was assessed (percent correct) and ease-of-listening ratings were obtained. The sentence understanding data were entered into repeated-measures analyses of variance.

Results: For both adult and pediatric listeners in the AV test conditions in level 1 noise, performance with the Roger Pen was significantly higher than with the T-mic. For both populations, performance in level 1 noise with the Roger Pen approached the level of baseline performance in quiet. Ease of listening in noise was rated higher in the Roger Pen conditions than in the T-mic or processor mic conditions in both A-only and AV test conditions.

Conclusion: The Roger remote mic and on-ear directional mic technologies benefit both speech understanding and ease of listening in a realistic laboratory test environment and are likely to do the same in real-world listening environments.


2017 ◽  
Vol 60 (10) ◽  
pp. 3019-3026 ◽  
Author(s):  
Michael F. Dorman ◽  
Rene H. Gifford

Purpose: The aim of this article is to summarize recent published and unpublished research from our 2 laboratories on improving speech understanding in complex listening environments by listeners fit with cochlear implants (CIs).

Method: CI listeners were tested in 2 listening environments. One was a simulation of a restaurant with multiple, diffuse noise sources, and the other was a cocktail party with 2 spatially separated point sources of competing speech. At issue was the value of the following sources of information, or interventions, on speech understanding:
(a) visual information,
(b) adaptive beamformer microphones and remote microphones,
(c) bimodal fittings, that is, a CI and contralateral low-frequency acoustic hearing,
(d) hearing preservation fittings, that is, a CI with preserved low-frequency acoustic hearing in the same ear plus low-frequency acoustic hearing in the contralateral ear, and
(e) bilateral CIs.

Results: A remote microphone provided the largest improvement in speech understanding. Visual information and adaptive beamformers ranked next, while bimodal fittings, bilateral fittings, and hearing preservation provided significant but less benefit than the other interventions or sources of information. Only bilateral CIs allowed listeners high levels of speech understanding when signals were roved over the frontal plane.

Conclusions: The evidence supports the use of bilateral CIs and hearing preservation surgery for best speech understanding in complex environments. These fittings, when combined with visual information and microphone technology, should lead to high levels of speech understanding by CI patients in complex listening environments.

Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601622


2013 ◽  
Vol 34 (2) ◽  
pp. 245-248 ◽  
Author(s):  
Michael F. Dorman ◽  
Anthony J. Spahr ◽  
Louise Loiselle ◽  
Ting Zhang ◽  
Sarah Cook ◽  
...  

2018 ◽  
Vol 61 (3) ◽  
pp. 752-761 ◽  
Author(s):  
Timothy J. Davis ◽  
René H. Gifford

Purpose: The primary purpose of this study was to derive spatial release from masking (SRM) performance-azimuth functions for bilateral cochlear implant (CI) users to provide a thorough description of SRM as a function of target/distracter spatial configuration. The secondary purpose was to investigate the effect of microphone location on SRM in a within-subject study design.

Method: Speech recognition was measured in 12 adults with bilateral CIs for 11 spatial separations ranging from −90° to +90° in 20° steps using an adaptive block design. Five of the 12 participants were tested with both the behind-the-ear microphones and a T-mic configuration to further investigate the effect of mic location on SRM.

Results: SRM can be significantly affected by the hemifield origin of the distracter stimulus, particularly for listeners with interaural asymmetry in speech understanding. The greatest SRM was observed with a distracter positioned 50° away from the target. There was no effect of mic location on SRM for the current experimental design.

Conclusion: Our results demonstrate that the traditional assessment of SRM with a distracter positioned at 90° azimuth may underestimate maximum performance for individuals with bilateral CIs.
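SRM is conventionally computed as the speech-reception threshold (SRT) with target and distracter co-located minus the SRT with them spatially separated; a minimal sketch, with hypothetical SRT values chosen only to echo the 50° peak reported above:

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM in dB: positive values mean spatial separation improved reception."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs (dB SNR) by distracter azimuth relative to the target.
srt_colocated = 2.0
srt_by_separation = {30: 0.5, 50: -3.0, 70: -1.5, 90: -1.0}

srm = {deg: spatial_release_from_masking(srt_colocated, srt)
       for deg, srt in srt_by_separation.items()}

# With these illustrative numbers, SRM peaks at 50°, not at 90°.
best_separation = max(srm, key=srm.get)
print(best_separation, srm[best_separation])
```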


2020 ◽  
Vol 63 (11) ◽  
pp. 3855-3864
Author(s):  
Wanting Huang ◽  
Lena L. N. Wong ◽  
Fei Chen ◽  
Haihong Liu ◽  
Wei Liang

Purpose: Fundamental frequency (F0) is the primary acoustic cue for lexical tone perception in tonal languages but is processed in a limited way in cochlear implant (CI) systems. The aim of this study was to evaluate the importance of F0 contours for sentence recognition in Mandarin-speaking children with CIs and to determine whether that importance differs from that in age-matched normal-hearing (NH) peers.

Method: Age-appropriate sentences, with F0 contours manipulated to be either natural or flattened, were randomly presented to preschool children with CIs and their age-matched NH peers under three test conditions: in quiet, in white noise, and with competing sentences at 0 dB signal-to-noise ratio.

Results: The neutralization of F0 contours resulted in a significant reduction in sentence recognition. While this reduction was seen only in noise conditions among NH children, it was observed across all test conditions among children with CIs. Moreover, the F0 contour-induced accuracy reduction ratios (i.e., the reduction in sentence recognition resulting from the neutralization of F0 contours relative to the natural F0 condition) were significantly greater in children with CIs than in NH children in all test conditions.

Conclusions: F0 contours play a major role in sentence recognition in both quiet and noise among pediatric implantees, and the contribution of the F0 contour is even more salient than in age-matched NH children. These results also suggest that children with CIs and NH children may differ in how F0 contours are processed.
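The accuracy reduction ratio described in the Results can be expressed directly; a minimal sketch with hypothetical percent-correct scores (the numbers below are ours, not the study's):

```python
def f0_reduction_ratio(natural_percent, flattened_percent):
    """Proportional drop in sentence recognition when F0 contours are flattened."""
    return (natural_percent - flattened_percent) / natural_percent

# Hypothetical scores in quiet, for illustration only.
nh_ratio = f0_reduction_ratio(95.0, 90.0)   # normal-hearing child
ci_ratio = f0_reduction_ratio(70.0, 45.0)   # child with a CI

# A larger ratio for the CI listener mirrors the study's pattern.
print(round(nh_ratio, 3), round(ci_ratio, 3))
```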


ASHA Leader ◽  
2010 ◽  
Vol 15 (2) ◽  
pp. 14-17 ◽  
Author(s):  
Ruth Litovsky
