Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners

2005
Vol 118 (2)
pp. 1111-1121
Author(s):  
Belinda A. Henry ◽  
Christopher W. Turner ◽  
Amy Behrens

Author(s):  
Zahra Nadimi ◽  
Mansoureh Adel Ghahraman ◽  
Ghassem Mohammadkhani ◽  
Reza Hoseinabadi ◽  
Shohreh Jalaie ◽  
...  

Background and Aim: The vestibular system has several anatomical connections with cognitive regions of the brain, and vestibular disorders have negative effects on cognitive performance. Hearing-impaired patients, particularly cochlear implant users, often have concomitant vestibular disorders. Previous studies have shown that the attention assigned to postural control decreases while performing a cognitive task (dual task) in hearing-impaired children. Since the vestibular system and postural control mature at around 15-16 years of age, the aim of this study was to compare postural control performance during a dual task in adolescent boys with normal hearing and cochlear implant (CI) users with congenital hearing impairment.

Methods: Postural control was assessed with a force plate in twenty 16-19-year-old boys with cochlear implants and 40 normal-hearing peers. The main outcomes were displacement in the posterior-anterior and medial-lateral planes and mean speed, measured with and without a cognitive task and under on- and off-device conditions. A caloric test was performed for the CI users to examine the peripheral vestibular system.

Results: Ninety-five percent of CI users showed caloric weakness. There were no significant differences in postural control parameters between groups. Performance deteriorated in the foam-pad condition compared with the hard surface in all groups. Total mean velocity increased significantly during the dual task in the normal-hearing group and in CI users with the device off.

Conclusion: Although CI users had apparent vestibular disorders, their postural control in both single- and dual-task conditions was identical to that of their normal-hearing peers. These effects can be attributed to vestibular compensation that takes place during development.

Keywords: Balance; postural control; dual task; congenital hearing loss; cochlear implant
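The outcome measures named in the abstract (center-of-pressure displacement in the posterior-anterior and medial-lateral planes, plus mean sway velocity) can be derived directly from sampled force-plate data. The sketch below is illustrative only; the sampling rate, array names, and exact metric definitions are assumptions, not details taken from the study.

```python
import numpy as np

def sway_metrics(cop_ap, cop_ml, fs=100.0):
    """Simple postural-sway metrics from center-of-pressure (COP) samples.

    cop_ap, cop_ml : 1-D arrays of COP position (cm) in the posterior-anterior
                     and medial-lateral directions.
    fs             : force-plate sampling rate in Hz (assumed value).
    """
    cop_ap = np.asarray(cop_ap, dtype=float)
    cop_ml = np.asarray(cop_ml, dtype=float)

    # Displacement range in each plane (maximum minus minimum excursion).
    ap_displacement = cop_ap.max() - cop_ap.min()
    ml_displacement = cop_ml.max() - cop_ml.min()

    # Total sway path: sum of distances between consecutive COP samples.
    path_length = np.hypot(np.diff(cop_ap), np.diff(cop_ml)).sum()

    # Mean sway velocity: path length divided by trial duration.
    duration = len(cop_ap) / fs
    mean_velocity = path_length / duration

    return {
        "ap_displacement_cm": ap_displacement,
        "ml_displacement_cm": ml_displacement,
        "mean_velocity_cm_per_s": mean_velocity,
    }
```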


1994
Vol 95 (5)
pp. 2992-2993
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

2005
Vol 48 (4)
pp. 910-921
Author(s):  
Laura E. Dreisbach ◽  
Marjorie R. Leek ◽  
Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
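The positive- and negative-Schroeder phase selections described above follow a standard construction from the psychoacoustics literature, in which the component phases are set by the Schroeder (1970) rule and the sign determines the curvature of the phase spectrum. The sketch below generates such a harmonic complex with a single spectral peak; the particular parameter values (fundamental frequency, number of components, peak region, sampling rate) are illustrative assumptions, not the stimulus parameters used in the study.

```python
import numpy as np

def schroeder_complex(f0=100.0, n_components=40, sign=+1,
                      peak_lo=2000.0, peak_hi=2200.0, peak_gain_db=10.0,
                      fs=44100, dur=0.5):
    """Harmonic complex with Schroeder-phase components and one spectral peak.

    sign = +1 gives a 'positive Schroeder' phase curvature, sign = -1 the
    'negative Schroeder' version; the two differ only in phase, not in the
    magnitude spectrum.
    """
    t = np.arange(int(fs * dur)) / fs
    n = np.arange(1, n_components + 1)            # harmonic numbers 1..N

    # Schroeder phase rule: phi_n = sign * pi * n * (n + 1) / N.
    phases = sign * np.pi * n * (n + 1) / n_components

    # Flat background amplitudes, boosted inside the spectral-peak region.
    freqs = n * f0
    amps = np.ones(n_components)
    in_peak = (freqs >= peak_lo) & (freqs <= peak_hi)
    amps[in_peak] *= 10 ** (peak_gain_db / 20.0)

    # Sum the cosine components and normalize to +/-1.
    signal = (amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t
                                     + phases[:, None])).sum(axis=0)
    return signal / np.abs(signal).max()
```

Raising `peak_gain_db` until the two peak locations become discriminable corresponds to the spectral contrast threshold measured in the study.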


2020
Author(s):  
Luuk P.H. van de Rijt ◽  
A. John van Opstal ◽  
Marc M. van Wanrooij

Abstract
The cochlear implant (CI) allows profoundly deaf individuals to partially recover hearing. Still, due to the coarse acoustic information provided by the implant, CI users have considerable difficulties in recognizing speech, especially in noisy environments, even years after implantation. CI users therefore rely more heavily on visual cues to augment speech comprehension than normal-hearing individuals do. However, it is unknown what role attention to one (focused) or both (divided) modalities plays in multisensory speech recognition. Here we show that unisensory speech listening and speech reading were negatively impacted in divided-attention tasks for CI users, but not for normal-hearing individuals. Our psychophysical experiments revealed that, as expected, listening thresholds were consistently better for the normal-hearing group, while lipreading thresholds were largely similar for the two groups. Moreover, audiovisual speech recognition for normal-hearing individuals could be described well by probabilistic summation of auditory and visual speech recognition, whereas CI users were better integrators than expected from statistical facilitation alone. Our results suggest that this benefit in integration comes at a cost: unisensory speech recognition is degraded for CI users when attention needs to be divided across modalities, i.e., in situations with uncertainty about the upcoming stimulus modality. We conjecture that CI users exhibit an integration-attention trade-off: they focus solely on a single modality during focused-attention tasks, but must divide their limited attentional resources across modalities during divided-attention tasks. We argue that, in order to determine the benefit of a CI for speech comprehension, situational factors need to be discounted by presenting speech in realistic or complex audiovisual environments.

Significance statement
Deaf individuals using a cochlear implant require significant effort to listen in noisy environments due to their impoverished hearing. Lipreading can benefit them and reduce the burden of listening by providing an additional source of information. Here we show, however, that the improved speech recognition under audiovisual stimulation comes at a cost, as cochlear-implant users now need to listen and speech-read simultaneously, paying attention to both modalities. The data suggest that cochlear-implant users run into the limits of their attentional resources, and we argue that they, unlike normal-hearing individuals, always need to consider whether a multisensory benefit outweighs the unisensory cost in everyday environments.
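The probabilistic-summation benchmark mentioned in the abstract is a standard way to predict audiovisual performance from the two unisensory scores: under statistical independence, a word is missed only if both the auditory and the visual channel miss it. The sketch below illustrates that prediction and the comparison with an observed audiovisual score; the numbers are hypothetical examples, not data from the study.

```python
def probability_summation(p_auditory, p_visual):
    """Predicted audiovisual recognition probability under independence:
    the word is missed only if both channels fail.
    """
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

# Hypothetical proportions of words recognized correctly.
p_a, p_v = 0.40, 0.30                               # auditory-only, lipreading-only
p_av_predicted = probability_summation(p_a, p_v)    # 0.58

# An observed audiovisual score above the prediction would indicate
# integration beyond statistical facilitation, as reported for the CI users.
p_av_observed = 0.70
print(f"predicted {p_av_predicted:.2f}, observed {p_av_observed:.2f}")
```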


2000
Vol 108 (5)
pp. 2377-2387
Author(s):  
Philipos C. Loizou ◽  
Michael Dorman ◽  
Oguz Poroy ◽  
Tony Spahr

2016
Vol 30 (3)
pp. 340-344
Author(s):  
Narges Jafari ◽  
Michael Drinnan ◽  
Reyhane Mohamadi ◽  
Fariba Yadegari ◽  
Mandana Nourbakhsh ◽  
...  
