KanNon System - Consonant Recognition for Phonemes

Author(s):  
K. Nakamura ◽  
H. Murata ◽  
T. Sagara ◽  
Y. Kubo ◽  
S. Sugimoto
1991 ◽  
Vol 34 (2) ◽  
pp. 415-426 ◽  
Author(s):  
Richard L. Freyman ◽  
G. Patrick Nerbonne ◽  
Heather A. Cote

This investigation examined the degree to which modification of the consonant-vowel (C-V) intensity ratio affected consonant recognition under conditions in which listeners were forced to rely more heavily on waveform envelope cues than on spectral cues. The stimuli were 22 vowel-consonant-vowel utterances, which had been mixed at six different signal-to-noise ratios with white noise that had been modulated by the speech waveform envelope. The resulting waveforms preserved the gross speech envelope shape, but spectral cues were limited by the white-noise masking. In a second stimulus set, the consonant portion of each utterance was amplified by 10 dB. Sixteen subjects with normal hearing listened to the unmodified stimuli, and 16 listened to the amplified-consonant stimuli. Recognition performance was reduced in the amplified-consonant condition for some consonants, presumably because waveform envelope cues had been distorted. However, for other consonants, especially the voiced stops, consonant amplification improved recognition. Patterns of errors were altered for several consonant groups, including some that showed only small changes in recognition scores. The results indicate that when spectral cues are compromised, nonlinear amplification can alter waveform envelope cues for consonant recognition.
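As a rough illustration of the stimulus processing this abstract describes, the Python sketch below generates white noise amplitude-modulated by the speech waveform envelope, mixes it with the speech at a target signal-to-noise ratio, and applies a 10 dB gain to the consonant segment. The envelope-extraction method (Hilbert magnitude with low-pass smoothing) and all function names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_modulated_noise(speech, fs, lp_cutoff=50.0):
    """White noise amplitude-modulated by the speech waveform envelope.

    Envelope extraction (Hilbert magnitude + low-pass smoothing) is an
    assumed method; the paper does not specify the exact procedure.
    """
    env = np.abs(hilbert(speech))                        # instantaneous amplitude
    b, a = butter(4, lp_cutoff / (fs / 2), btype="low")  # smooth the envelope
    env = np.clip(filtfilt(b, a, env), 0.0, None)
    return env * np.random.randn(len(speech))

def mix_at_snr(speech, masker, snr_db):
    """Scale the masker so the speech-to-masker power ratio equals snr_db."""
    p_s, p_m = np.mean(speech**2), np.mean(masker**2)
    scale = np.sqrt(p_s / (p_m * 10 ** (snr_db / 10)))
    return speech + scale * masker

def amplify_consonant(signal, start, stop, gain_db=10.0):
    """Apply gain_db to the consonant portion [start, stop), in samples."""
    out = signal.copy()
    out[start:stop] *= 10 ** (gain_db / 20)
    return out
```

Applying `amplify_consonant` after the envelope has been measured, as in the amplified-consonant condition, changes the gross envelope shape of the mixed waveform, which is the distortion the abstract offers as the likely cause of the reduced scores for some consonants.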


2009 ◽  
Vol 126 (5) ◽  
pp. 2683-2694 ◽  
Author(s):  
Sandeep A. Phatak ◽  
Yang-soo Yoon ◽  
David M. Gooler ◽  
Jont B. Allen

1998 ◽  
Vol 103 (2) ◽  
pp. 1098-1114 ◽  
Author(s):  
Elizabeth Kennedy ◽  
Harry Levitt ◽  
Arlene C. Neuman ◽  
Mark Weiss

2021 ◽  
Vol 32 (8) ◽  
pp. 521-527 ◽  
Author(s):  
Yang-Soo Yoon ◽  
George Whitaker ◽  
Yune S. Lee

Abstract

Background: Cochlear implant technology allows acoustic and electric stimulation to be combined across ears (bimodal) or within the same ear (electric-acoustic stimulation [EAS]). The mechanisms used to integrate speech acoustics may differ between bimodal and EAS hearing, and the configuration of hearing loss may be an important factor in that integration. It is therefore important to differentiate the effects of different configurations of hearing loss on bimodal or EAS benefit in speech perception (the difference in performance between combined acoustic and electric stimulation and the better stimulation alone).

Purpose: Using acoustic simulation, we determined how consonant recognition was affected by different configurations of hearing loss in bimodal and EAS hearing.

Research Design: A mixed design was used with one between-subject variable (simulated bimodal group vs. simulated EAS group) and one within-subject variable (acoustic stimulation alone, electric stimulation alone, and combined acoustic and electric stimulation).

Study Sample: Twenty adult subjects with normal hearing (10 per group) were recruited.

Data Collection and Analysis: Consonant perception was measured unilaterally or bilaterally in quiet. For the acoustic stimulation, four configurations of hearing loss were simulated by band-pass filtering the consonants with a fixed lower cutoff frequency of 100 Hz and one of four upper cutoff frequencies: 250, 500, 750, or 1,000 Hz. For the electric stimulation, an eight-channel noise vocoder was used to generate a typical spectral mismatch, with fixed input (200-7,000 Hz) and output (1,000-7,000 Hz) frequency ranges. The effects of the simulated hearing loss on consonant recognition were compared between the two groups.

Results: Significant bimodal and EAS benefits occurred regardless of the configuration of hearing loss and the hearing technology (bimodal vs. EAS). Place information was transmitted better in EAS hearing than in bimodal hearing.

Conclusion: These results suggest that the configuration of hearing loss is not a significant factor in integrating consonant information between acoustic and electric stimulations. They also suggest that the mechanisms used to integrate consonant information may be similar in bimodal and EAS hearing.
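As a rough illustration of the signal processing described under Data Collection and Analysis, the Python sketch below band-pass filters a token to simulate residual acoustic hearing (100 Hz up to one of the four upper cutoffs) and runs it through an eight-channel noise vocoder whose analysis bands span 200-7,000 Hz but whose noise-carrier bands span 1,000-7,000 Hz, producing the spectral mismatch the abstract mentions. Log-spaced channel edges, Hilbert-based envelope extraction, and the 160 Hz envelope cutoff are assumptions; the study's exact vocoder parameters may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

def lowpass_env(x, fs, cutoff=160.0):
    # Channel envelope: Hilbert magnitude, low-pass smoothed (assumed method).
    env = np.abs(hilbert(x))
    sos = butter(4, cutoff / (fs / 2), btype="low", output="sos")
    return np.clip(sosfiltfilt(sos, env), 0.0, None)

def log_edges(lo, hi, n):
    # n+1 logarithmically spaced band edges between lo and hi.
    return np.geomspace(lo, hi, n + 1)

def simulate_residual_hearing(x, fs, upper_cutoff):
    # Acoustic simulation: band-pass 100 Hz up to 250/500/750/1,000 Hz.
    return bandpass(x, fs, 100.0, upper_cutoff)

def noise_vocoder(x, fs, n_ch=8, in_range=(200.0, 7000.0), out_range=(1000.0, 7000.0)):
    """Noise vocoder with a spectral mismatch between analysis and carrier bands."""
    x = np.asarray(x, dtype=float)
    in_edges = log_edges(*in_range, n_ch)    # analysis bands (input range)
    out_edges = log_edges(*out_range, n_ch)  # carrier bands (shifted output range)
    y = np.zeros_like(x)
    for k in range(n_ch):
        env = lowpass_env(bandpass(x, fs, in_edges[k], in_edges[k + 1]), fs)
        carrier = bandpass(np.random.randn(len(x)), fs, out_edges[k], out_edges[k + 1])
        y += env * carrier
    return y
```

Mapping analysis bands starting at 200 Hz onto carrier bands starting at 1,000 Hz shifts every channel upward in frequency, which is how a typical place mismatch between the acoustic input and the electrode array is commonly simulated in vocoder studies.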


1987 ◽  
Vol 82 (4) ◽  
pp. 1152-1161 ◽  
Author(s):  
Dianne J. Van Tasell ◽  
Sigfrid D. Soli ◽  
Virginia M. Kirby ◽  
Gregory P. Widin
