Contribution of frequency compressed temporal fine structure cues to the speech recognition in noise: An implication in cochlear implant signal processing

2022, Vol. 189, pp. 108616
Author(s): Venkateswarlu Poluboina, Aparna Pulikala, Arivudai Nambi Pitchai Muthu
2021, Vol. 32 (8), pp. 478-486
Author(s): Lisa G. Potts, Soo Jang, Cory L. Hillis

Abstract
Background: For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer than recognition in quiet. Directional processing improves performance in noise and can be activated automatically based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially with the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction.
Purpose: This study used CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment and determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming.
Study Sample: Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included.
Data Collection and Analysis: Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array, with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and Omni-directional) with participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed.
Results: Automatic directionality (SCAN) yielded the best speech-in-noise performance, although it was not significantly better than Beam. Omni-directional performance was significantly poorer than the three other directional options. Participants varied in which of the four directional options produced their best performance, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option.
Conclusion: The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. However, for recipients who are not motivated to try different programs, automatic directionality is an appropriate everyday processing option.
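The mixed-model ANOVA with pairwise comparisons described above can be sketched in code. The example below is a minimal illustration on fabricated scores: the column names (subject, condition, score), the use of a repeated-measures ANOVA from statsmodels, and the Bonferroni-corrected paired t-tests are assumptions made for illustration, not the authors' actual analysis pipeline or data.

```python
# Minimal sketch: repeated-measures ANOVA across four directional options,
# followed by Bonferroni-corrected pairwise comparisons.
# All data and column names here are hypothetical.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
conditions = ["SCAN", "Beam", "Zoom", "Omni"]
subjects = [f"S{i:02d}" for i in range(1, 34)]  # 33 hypothetical recipients

# Hypothetical percent-correct scores per subject and directional option.
rows = [
    {"subject": s, "condition": c,
     "score": rng.normal(loc={"SCAN": 70, "Beam": 67, "Zoom": 63, "Omni": 50}[c], scale=10)}
    for s in subjects for c in conditions
]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with one within-subject factor (directional option).
anova = AnovaRM(data=df, depvar="score", subject="subject", within=["condition"]).fit()
print(anova.anova_table)

# Pairwise paired t-tests between conditions, Bonferroni-corrected.
pairs = list(combinations(conditions, 2))
pvals = [
    ttest_rel(df.loc[df.condition == a, "score"].to_numpy(),
              df.loc[df.condition == b, "score"].to_numpy()).pvalue
    for a, b in pairs
]
reject, p_adj, _, _ = multipletests(pvals, method="bonferroni")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: corrected p = {p:.3f}  significant = {r}")
```

The study's actual mixed model may have included additional factors (for example, unilateral versus bilateral use); the sketch shows only the single within-subject factor of directional option.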


2020, Vol. 21 (6), pp. 527-544
Author(s): H. C. Stronks, J. J. Briaire, J. H. M. Frijns

Abstract
Cochlear implant (CI) users have more difficulty understanding speech in temporally modulated noise than in steady-state (SS) noise. This is thought to be caused by the limited low-frequency information that CIs provide, as well as by the envelope coding in CIs that discards the temporal fine structure (TFS). Contralateral amplification with a hearing aid, referred to as bimodal hearing, can potentially provide CI users with TFS cues to complement the envelope cues provided by the CI signal. In this study, we investigated whether the use of a CI alone provides access to only envelope cues and whether acoustic amplification can provide additional access to TFS cues. To this end, we evaluated speech recognition in bimodal listeners, using SS noise and two amplitude-modulated noise types, namely babble noise and amplitude-modulated steady-state (AMSS) noise. We hypothesized that speech recognition in noise depends on the envelope of the noise, but not on its TFS, when listening with a CI. Secondly, we hypothesized that the amount of benefit gained by the addition of a contralateral hearing aid depends on both the envelope and TFS of the noise.
The two amplitude-modulated noise types decreased speech recognition more effectively than SS noise. Against expectations, however, we found that babble noise decreased speech recognition more effectively than AMSS noise in the CI-only condition. Therefore, we rejected our hypothesis that TFS is not available to CI users. In line with expectations, we found that the bimodal benefit was highest in babble noise. However, there was no significant difference between the bimodal benefit obtained in SS and AMSS noise. Our results suggest that a CI alone can provide TFS cues and that bimodal benefits in noise depend on TFS, but not on the envelope of the noise.
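The envelope/TFS distinction at the center of this abstract is usually made concrete with the Hilbert decomposition: within a frequency band, the magnitude of the analytic signal gives the envelope, and the cosine of its instantaneous phase gives the temporal fine structure. The sketch below illustrates that decomposition on a synthetic signal; the band edges, filter order, and input are arbitrary assumptions, not the processing used in this study or in any particular CI processor.

```python
# Minimal sketch: split one frequency band of a signal into its Hilbert
# envelope and temporal fine structure (TFS). Band edges, filter order,
# and the synthetic input are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000                                   # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
# Synthetic stand-in for speech: a 1 kHz carrier with a 4 Hz amplitude modulation.
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

# Band-pass the signal into a single analysis band (here 500-1500 Hz).
b, a = butter(4, [500, 1500], btype="bandpass", fs=fs)
band = filtfilt(b, a, x)

# Hilbert analytic signal: magnitude = envelope, cosine of phase = TFS.
analytic = hilbert(band)
envelope = np.abs(analytic)                  # slow amplitude fluctuations (envelope cues)
tfs = np.cos(np.angle(analytic))             # rapid oscillations (fine-structure cues)

# A vocoder-style "envelope-only" reconstruction of this band would discard
# the TFS and modulate a fixed carrier with the envelope instead.
envelope_only = envelope * np.sin(2 * np.pi * 1000 * t)
```

In this framing, CI processing is commonly approximated as transmitting something like the per-channel envelope while discarding the TFS, which is why a contralateral acoustic ear is considered a candidate route for restoring fine-structure cues.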


2012, Vol. 132 (11), pp. 1183-1191
Author(s): Beier Qi, Andreas Krenmayr, Ning Zhang, Ruijuan Dong, Xueqing Chen, et al.

2013, Vol. 133 (5), pp. 3380-3380
Author(s): Frederic Apoux, Carla L. Youngdahl, Sarah E. Yoho, Eric W. Healy

2004, Vol. 115 (4), pp. 1729-1735
Author(s): Christopher W. Turner, Bruce J. Gantz, Corina Vidal, Amy Behrens, Belinda A. Henry
