AzBio Speech Understanding Performance in Quiet and Noise in High Performing Cochlear Implant Users

2018, Vol 39(5), pp. 571-575
Author(s): Jason A. Brant, Steven J. Eliades, Hannah Kaufman, Jinbo Chen, Michael J. Ruckenstein

2016, Vol 55(5), pp. 295-304
Author(s): Stefan Zirn, Daniel Polterauer, Stefanie Keller, Werner Hemmert

2018
Author(s): Eline Verschueren, Ben Somers, Tom Francart

ABSTRACT
The speech envelope is essential for speech understanding and can be reconstructed from the electroencephalogram (EEG) recorded while listening to running speech. This so-called neural envelope tracking has been shown to relate to speech understanding in normal-hearing listeners, but has barely been investigated in persons wearing cochlear implants (CIs). We investigated the relation between speech understanding and neural envelope tracking in CI users. EEG was recorded in 8 CI users while they listened to a story. Speech understanding was varied by changing the intensity of the presented speech. The speech envelope was reconstructed from the EEG using a linear decoder and then correlated with the envelope of the speech stimulus as a measure of neural envelope tracking, which was compared with actual speech understanding. This study showed that neural envelope tracking increased with increasing speech understanding in every participant. Furthermore, behaviorally measured speech understanding was correlated with participant-specific neural envelope tracking results, indicating the potential of neural envelope tracking as an objective measure of speech understanding in CI users. This could enable objective and automatic fitting of CIs and pave the way towards closed-loop CIs that adjust continuously and automatically to the individual CI user.
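To make the analysis concrete, here is a minimal Python sketch of backward-model envelope reconstruction of the kind the abstract describes: extract the stimulus envelope, train a time-lagged linear (ridge) decoder on the EEG, and take the correlation between the reconstructed and actual envelopes as the tracking measure. The function names and parameter values (the 8 Hz envelope cutoff, 32 lags, the ridge weight) are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
from scipy.stats import pearsonr

def speech_envelope(audio, fs, cutoff=8.0):
    """Broadband envelope: magnitude of the analytic signal, low-pass filtered."""
    env = np.abs(hilbert(audio))
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of every EEG channel (samples x channels*lags)."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(X, env, alpha=1e3):
    """Ridge-regularized least squares: w = (X'X + alpha*I)^-1 X' env."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + alpha * np.eye(XtX.shape[0]), X.T @ env)

# Envelope tracking = correlation between reconstructed and actual envelopes.
# eeg_train/eeg_test: samples x channels arrays (hypothetical, band-pass
# filtered and downsampled to the envelope's sampling rate).
# w = train_decoder(lagged_design(eeg_train, 32), env_train)
# r, _ = pearsonr(lagged_design(eeg_test, 32) @ w, env_test)
```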


2019, Vol 13
Author(s): Ellen Andries, Vincent Van Rompaey, Paul Van de Heyning, Griet Mertens

1984, Vol 76(S1), pp. S48-S48
Author(s): Ingeborg J. Hochmair‐Desoyer, Helmut K. Stiglbrunner, Ernst‐Ludwig Wallenberg

1994, Vol 95(5), pp. 2905-2905
Author(s): Richard S. Tyler, Mary Lowder, George Woodworth, Aaron Parkinson

2019, Vol 12
Author(s): Jake Hillyer, Elizabeth Elkins, Chantel Hazlewood, Stacey D. Watson, Julie G. Arenberg, ...

2019, Vol 23, Article 233121651988668
Author(s): Zilong Xie, Casey R. Gaskins, Maureen J. Shader, Sandra Gordon-Salant, Samira Anderson, ...

Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism underlying the speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words "dish" and "ditch": listeners identified each token along a continuum of speech stimuli with varying silence duration (0 to 60 ms) prior to the final fricative. In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users performed the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI, OCI users required longer silence durations to identify "ditch" and showed a reduced ability to distinguish "dish" from "ditch" (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3, who demonstrated age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to use brief temporal cues in word identification, particularly at high presentation levels. Age-specific CI programming may improve clinical outcomes for speech understanding in older CI listeners.
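For illustration, a categorization function like the one described can be summarized by fitting a two-parameter logistic to the proportion of "ditch" responses as a function of silence duration; the fitted midpoint (category boundary) and slope are the quantities compared across age groups. The data and starting values in this Python sketch are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(duration_ms, midpoint, slope):
    """Probability of a 'ditch' response as a function of silence duration."""
    return 1.0 / (1.0 + np.exp(-slope * (duration_ms - midpoint)))

# Hypothetical data: silence durations (ms) and proportion of 'ditch' responses.
durations = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_ditch = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98])

(midpoint, slope), _ = curve_fit(logistic, durations, p_ditch, p0=[30.0, 0.2])
# A longer midpoint = more silence needed before 'ditch' is heard;
# a shallower slope = poorer dish/ditch distinction.
print(f"category boundary: {midpoint:.1f} ms, slope: {slope:.3f} /ms")
```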


2019 ◽  
Vol 30 (08) ◽  
pp. 659-671 ◽  
Author(s):  
Ashley Zaleski-King ◽  
Matthew J. Goupell ◽  
Dragana Barac-Cikoja ◽  
Matthew Bakke

Abstract
Background: Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are at best inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) well enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments.
Purpose: To determine whether bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task.
Research Design: Repeated-measures design.
Study Sample: Seven adult bimodal CI users (28–62 years). All listeners reported regular use of digital HA technology in the nonimplanted ear.
Data Collection and Analysis: The seven bimodal listeners were asked to balance the loudness of prerecorded single-syllable utterances. The loudness-balanced stimuli were then presented via the direct audio inputs of the two devices with an ITD applied. The task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured for different spatial locations both with and without an IDD correction, which was added with the intent of perceptually synchronizing the devices.
Results: During the loudness-balancing task, all listeners required increased acoustic input to the HA relative to the CI most comfortable level to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left versus the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other. When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who were able to consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though they could have used a monaural loudness-cue strategy.
Conclusions: These data suggest that sound localization is extremely difficult for most bimodal listeners. This difficulty does not seem to be caused by large loudness imbalances or IDDs. Sound localization is best when performed via a binaural comparison, where frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with an HA, when combined with a CI, may produce an overlapping region of frequency-matched inputs and thus provide an opportunity for binaural comparisons for some bimodal listeners, our study showed that this may not be beneficial or useful for spatial-location discrimination tasks. The inability of our listeners to use monaural level cues to perform the MAA task highlights the difficulty of using a HA and CI together to glean information on the direction of a sound source.
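As a rough illustration of the lateralization manipulation described above, the Python sketch below imposes an interaural delay (an ITD, or equivalently an IDD correction) on a loudness-balanced token by delaying one channel before presentation over the two direct audio inputs. The function and the example values are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def apply_itd(signal, fs, itd_s):
    """Impose an interaural time difference on a mono signal by delaying one
    channel; positive itd_s delays the right (e.g., CI-side) channel."""
    shift = int(round(abs(itd_s) * fs))
    left = np.concatenate([signal, np.zeros(shift)])
    right = np.concatenate([np.zeros(shift), signal])
    if itd_s < 0:
        left, right = right, left
    return np.stack([left, right], axis=1)  # samples x 2 (L, R)

# E.g., delay the CI side by 0.73 ms, one of the delays at which listeners
# reported the signal lateralizing from one side to the other.
fs = 44100
# stereo = apply_itd(mono_syllable, fs, itd_s=0.73e-3)  # mono_syllable: hypothetical token
```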

