Speech Understanding in Complex Listening Environments by Listeners Fit With Cochlear Implants

2017 ◽  
Vol 60 (10) ◽  
pp. 3019-3026 ◽  
Author(s):  
Michael F. Dorman ◽  
Rene H. Gifford

Purpose The aim of this article is to summarize recent published and unpublished research from our 2 laboratories on improving speech understanding in complex listening environments by listeners fit with cochlear implants (CIs). Method CI listeners were tested in 2 listening environments. One was a simulation of a restaurant with multiple, diffuse noise sources, and the other was a cocktail party with 2 spatially separated point sources of competing speech. At issue was the value of the following sources of information, or interventions, on speech understanding: (a) visual information, (b) adaptive beamformer microphones and remote microphones, (c) bimodal fittings, that is, a CI and contralateral low-frequency acoustic hearing, (d) hearing preservation fittings, that is, a CI with preserved low-frequency acoustic hearing in the same ear plus low-frequency acoustic hearing in the contralateral ear, and (e) bilateral CIs. Results A remote microphone provided the largest improvement in speech understanding. Visual information and adaptive beamformers ranked next, while bimodal fittings, bilateral fittings, and hearing preservation provided significant but smaller benefit than the other interventions or sources of information. Only bilateral CIs allowed listeners high levels of speech understanding when signals were roved over the frontal plane. Conclusions The evidence supports the use of bilateral CIs and hearing preservation surgery for best speech understanding in complex environments. These fittings, when combined with visual information and microphone technology, should lead to high levels of speech understanding by CI patients in complex listening environments. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601622

2012 ◽  
Vol 23 (06) ◽  
pp. 385-395 ◽  
Author(s):  
Michael F. Dorman ◽  
Anthony Spahr ◽  
Rene H. Gifford ◽  
Sarah Cook ◽  
Ting Zhang ◽  
...  

In this article we review, and discuss the clinical implications of, five projects currently underway in the Cochlear Implant Laboratory at Arizona State University. The projects are (1) norming the AzBio sentence test, (2) comparing the performance of bilateral and bimodal cochlear implant (CI) patients in realistic listening environments, (3) accounting for the benefit provided to bimodal patients by low-frequency acoustic stimulation, (4) assessing localization by bilateral hearing aid patients and the implications of that work for hearing preservation patients, and (5) studying heart rate variability as a possible measure for quantifying the stress of listening via an implant. The long-term goals of the laboratory are to improve the performance of patients fit with cochlear implants and to understand the mechanisms, physiological or electronic, that underlie changes in performance. We began our work with cochlear implant patients in the mid-1980s and received our first grant from the National Institutes of Health (NIH) for work with implanted patients in 1989. Since that date our work with cochlear implant patients has been funded continuously by the NIH. In this report we describe some of the research currently being conducted in our laboratory.


2015 ◽  
Vol 20 (3) ◽  
pp. 166-171 ◽  
Author(s):  
Louise H. Loiselle ◽  
Michael F. Dorman ◽  
William A. Yost ◽  
René H. Gifford

The aim of this article was to study sound source localization by cochlear implant (CI) listeners with low-frequency (LF) acoustic hearing in both the operated ear and in the contralateral ear. Eight CI listeners had symmetrical LF acoustic hearing and 4 had asymmetrical LF acoustic hearing. The effects of two variables were assessed: (i) the symmetry of the LF thresholds in the two ears and (ii) the presence/absence of bilateral acoustic amplification. Stimuli consisted of low-pass, high-pass, and wideband noise bursts presented in the frontal horizontal plane. Localization accuracy was 23° of error for the symmetrical listeners and 76° of error for the asymmetrical listeners. The presence of a unilateral CI used in conjunction with bilateral LF acoustic hearing does not impair sound source localization accuracy, but amplification for acoustic hearing can be detrimental to sound source localization accuracy.
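As an illustrative aside, localization accuracy of the kind reported above (degrees of error) can be summarized as the mean absolute difference between target and response azimuths. This is a minimal sketch, not the authors' analysis code; the azimuth values below are hypothetical.

```python
# Minimal sketch of a "degrees of error" localization score.
# Targets and responses are azimuths (degrees) in the frontal horizontal plane.
# All values here are hypothetical, for illustration only.

def mean_abs_error(targets, responses):
    """Mean absolute difference between target and response azimuths, in degrees."""
    return sum(abs(t - r) for t, r in zip(targets, responses)) / len(targets)

targets   = [-60, -30, 0, 30, 60]   # hypothetical loudspeaker azimuths
responses = [-45, -20, 5, 50, 80]   # hypothetical listener responses

print(mean_abs_error(targets, responses))  # 14.0
```

Studies differ in whether they report mean absolute error or RMS error; the abstract does not specify which was used, so this sketch shows only the simpler of the two.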


2017 ◽  
Vol 2 (6) ◽  
pp. 54-63
Author(s):  
Conor Kelly ◽  
Lina A. J. Reiss

Hearing preservation cochlear implants (CIs) are specifically designed to preserve residual low-frequency acoustic hearing for use together with electrically stimulated high-frequency hearing. This combined electro-acoustic stimulation (EAS) provides a promising treatment option for patients with severe high-frequency hearing loss but some residual low-frequency hearing, and it has been shown to improve speech perception (especially in background noise), music perception, and sound source localization. Thus, preservation of residual hearing should be a priority in treatment. Although residual low-frequency hearing is successfully preserved to varying degrees in many patients, some patients experience a loss of residual hearing following implantation. A wide range of potential causes of, or contributors to, loss of residual hearing in EAS CI users has been proposed. In this paper, we review the evidence for several of the proposed causes of hearing loss with EAS CIs. We conclude that the etiology is likely multifactorial and heterogeneous. Furthermore, we suggest that studies are needed to further elucidate the effects of ischemia on lateral wall function and maintenance of the endocochlear potential in the context of EAS CI implantation and use.


2013 ◽  
Vol 34 (2) ◽  
pp. 245-248 ◽  
Author(s):  
Michael F. Dorman ◽  
Anthony J. Spahr ◽  
Louise Loiselle ◽  
Ting Zhang ◽  
Sarah Cook ◽  
...  

2016 ◽  
Vol 59 (6) ◽  
pp. 1505-1519 ◽  
Author(s):  
Michael F. Dorman ◽  
Julie Liss ◽  
Shuai Wang ◽  
Visar Berisha ◽  
Cimarron Ludwig ◽  
...  

Purpose Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Method Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. Results (a) Most CI users report that most of the time, they have access to both A and V information when listening to speech. (b) CI users did not achieve better scores on a task of speechreading than did listeners with normal hearing. (c) Sentences that are easy to speechread provided 12 percentage points more gain to speech understanding than did sentences that were difficult. (d) Ease of speechreading for sentences is related to phrase familiarity. (e) Users of bimodal CIs benefit from low-frequency acoustic hearing even when V cues are available, and a second CI adds to the benefit of a single CI when V cues are available. (f) V information facilitates lexical segmentation by improving the recognition of the number of syllables produced and the relative strength of these syllables. Conclusions Our data are consistent with the view that V information improves CI users' ability to identify syllables in the acoustic stream and to recognize their relative juxtaposed strengths. Enhanced syllable resolution allows better identification of word onsets, which, when combined with place-of-articulation information from visible consonants, improves lexical access.


2017 ◽  
Vol 60 (8) ◽  
pp. 2360-2363 ◽  
Author(s):  
Michael F. Dorman ◽  
Sarah Natale ◽  
Anthony Spahr ◽  
Erin Castioni

Purpose The aim of this experiment was to compare, for patients with cochlear implants (CIs), the improvement in speech understanding in noise provided by a monaural adaptive beamformer and by two interventions that produced bilateral input (i.e., bilateral CIs and hearing preservation [HP] surgery). Method Speech understanding scores for sentences were obtained for 10 listeners fit with a single CI. The listeners were tested with and without the beamformer activated in a "cocktail party" environment with spatially separated target and maskers. Data for 10 listeners with bilateral CIs and 8 listeners with HP CIs were taken from Loiselle, Dorman, Yost, Cook, and Gifford (2016), who used the same test protocol. Results The use of the beamformer resulted in a 31 percentage point improvement in performance; bilateral CIs, an 18 percentage point improvement; and HP CIs, a 20 percentage point improvement. Conclusion A monaural adaptive beamformer can produce an improvement in speech understanding in a complex noise environment that is equal to, or greater than, the improvement produced by bilateral CIs or HP surgery.


2019 ◽  
Vol 30 (10) ◽  
pp. 918-926 ◽  
Author(s):  
Ashley M. Nassiri ◽  
Robert J. Yawn ◽  
René H. Gifford ◽  
David S. Haynes ◽  
Jillian B. Roberts ◽  
...  

Abstract In current practice, the status of residual low-frequency acoustic hearing in hearing preservation cochlear implantation (CI) is unknown until activation two to three weeks postoperatively. The intraoperatively measured electrically evoked compound action potential (ECAP), a synchronous response from electrically stimulated auditory nerve fibers, is one of the first markers of auditory nerve function after cochlear implant surgery and as such may provide information regarding the status of residual low-frequency acoustic hearing. This study aimed to evaluate the relationship between intraoperative ECAP at the time of CI and the presence of preoperative and postoperative low-frequency acoustic hearing. The design was a retrospective case review of 217 adult ears receiving CI (42 Advanced Bionics, 82 Cochlear, and 93 MED-EL implants); the interventions were intraoperative ECAP and CI. ECAP measurements were obtained intraoperatively, whereas residual hearing data were obtained from the postoperative CI activation audiogram. A linear mixed model test revealed no interaction effects for the following variables: manufacturer, electrode location (basal, middle, and apical), preoperative low-frequency pure-tone average (LFPTA), and postoperative LFPTA. Postoperative residual low-frequency hearing was defined as preservation of unaided air conduction thresholds ≤90 dB at 250 Hz. Electrode location and hearing preservation data were analyzed individually for both the ECAP threshold and the ECAP maximum amplitude using multiple t-tests, without assuming a consistent standard deviation between groups, and with alpha correction. The maximum amplitude, in microvolts, was significantly higher throughout the apical and middle regions of the cochlea in patients who had preserved low-frequency acoustic hearing than in those who did not (p = 0.0001 and p = 0.0088, respectively). The ECAP threshold, in microamperes, was significantly lower throughout the apical region of the cochlea in patients with preserved low-frequency acoustic hearing than in those without (p = 0.0099). Basal electrode maximum amplitudes and middle and basal electrode thresholds were not significantly correlated with postoperative low-frequency hearing. Apical and middle electrode maximum amplitudes and apical electrode thresholds detected through intraoperative ECAP measurements are thus significantly correlated with preservation of low-frequency acoustic hearing. This association may provide an immediate intraoperative feedback mechanism for postoperative outcomes that can be applied to all CIs.
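The abstract mentions multiple t-tests "with alpha correction" but does not name the procedure. As an illustrative aside, a Bonferroni adjustment (one common choice, assumed here) tests each p-value against alpha divided by the number of comparisons; treating the three reported p-values as the full family of comparisons is likewise an assumption made for this sketch.

```python
# Hedged sketch of an alpha-correction step (Bonferroni, assumed):
# with k planned comparisons, each p-value is tested against alpha / k.
# Treating the three p-values reported in the abstract as the whole
# comparison family is an assumption for illustration only.

def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each p-value, whether it survives Bonferroni correction."""
    adjusted_alpha = alpha / len(p_values)
    return [p < adjusted_alpha for p in p_values]

# apical amplitude, middle amplitude, apical threshold (as reported)
p_values = [0.0001, 0.0088, 0.0099]
print(bonferroni_significant(p_values))  # [True, True, True]
```

With three comparisons the adjusted alpha is 0.05 / 3 ≈ 0.0167, so all three reported p-values remain significant after correction.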


2020 ◽  
Vol 31 (07) ◽  
pp. 547-550
Author(s):  
Michael F. Dorman ◽  
Sarah Natale ◽  
Alissa Knickerbocker

Abstract Background Previous research has found that when the location of a talker was varied and an auditory prompt indicated the location of the talker, the addition of visual information produced a significant and large improvement in speech understanding for listeners with bilateral cochlear implants (CIs) but not with a unilateral CI. Presumably, the sound-source localization ability of the bilateral CI listeners allowed them to orient to the auditory prompt and benefit from visual information for the subsequent target sentence. Purpose The goal of this project was to assess the robustness of previous research by using a different test environment, a different CI, different test material, and a different response measure. Research Design Nine listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant. Auditory–visual (AV) sentence material was presented from loudspeakers and video monitors at 0, +90, and −90 degrees. Each trial started with the presentation of an auditory alerting phrase from one of the three target loudspeakers followed by an AV target sentence from that loudspeaker/monitor. On each trial, the two nontarget monitors showed the speaker mouthing a different sentence. Sentences were presented in noise in four test conditions: one CI, one CI plus vision, bilateral CIs, and bilateral CIs plus vision. Results Mean percent words correct for the four test conditions were: one CI, 43%; bilateral CI, 60%; one CI plus vision, 52%; and bilateral CI plus vision, 84%. Visual information did not significantly improve performance in the single CI conditions but did improve performance in the bilateral CI conditions. The magnitude of improvement for two CIs versus one CI in the AV condition was approximately twice that for two CIs versus one CI in the auditory condition. Conclusions Our results are consistent with previous data showing the large value of bilateral implants in a complex AV listening environment. 
The results indicate that the value of bilateral CIs for speech understanding is significantly underestimated in standard, auditory-only, single-speaker, test environments.


Author(s):  
Ian Christopher Calloway

Prior studies suggest that listeners are more likely to categorize a sibilant ranging acoustically from [ʃ] to [s] as /s/ if provided auditory or visual information about the speaker that suggests male gender. Social cognition can also be affected by experimentally induced differences in power. A powerful individual’s impression of another tends to show greater consistency with the other person’s broad social category, while a powerless individual’s impression is more consistent with the specific pieces of information provided about the other person. This study investigated whether sibilant categorization would be influenced by power when the listener is presented with inconsistent sources of information about speaker gender. Participants were experimentally primed for behavior consistent with powerful or powerless individuals. They then completed a forced choice identification task: They saw a visual stimulus (a male or female face) and categorized an auditory stimulus (ranging from ‘shy’ to ‘sigh’) as /ʃ/ or /s/. As expected, participants primed for high power were sensitive to a single cue to gender, while those who received the low power prime were sensitive to both, even if the cues did not match. This result suggests that variability in listener power may cause systematic differences in phonetic perception.


2016 ◽  
Vol 59 (4) ◽  
pp. 810-818 ◽  
Author(s):  
Louise H. Loiselle ◽  
Michael F. Dorman ◽  
William A. Yost ◽  
Sarah J. Cook ◽  
Rene H. Gifford

Purpose To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to assess binaural hearing: sound-source localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Results Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time differences or interaural level differences, were available. Conclusions The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
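As an illustrative aside on the interaural time differences studied above, the classic Woodworth spherical-head approximation (a standard textbook model, not taken from this study) predicts the ITD produced by a source at a given azimuth. The head radius and speed of sound below are conventional assumed values.

```python
import math

# Woodworth spherical-head approximation for the interaural time
# difference (ITD) of a distant source at azimuth theta:
#   ITD ≈ (r / c) * (theta + sin(theta)),  theta in radians.
# This is a standard textbook model, not the study's method; the head
# radius (8.75 cm) and speed of sound (343 m/s) are assumed defaults.

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Predicted ITD in seconds for a source at the given azimuth (degrees)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly ahead produces no ITD; one at 90 degrees azimuth
# produces an ITD of roughly 0.66 ms under these assumptions.
print(round(woodworth_itd(90) * 1000, 2))  # 0.66
```

The model illustrates why low-pass stimuli favor ITD cues for hearing-preservation listeners: sub-millisecond ITDs are carried in the fine structure of low-frequency sound, which residual acoustic hearing can convey.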

