Effect of relative amplitude, presentation level, and vowel duration on perception of voiceless stop consonants by normal and hearing‐impaired listeners

1996 ◽  
Vol 100 (5) ◽  
pp. 3398-3407 ◽  
Author(s):  
Mark S. Hedrick ◽  
Walt Jesteadt

2021 ◽  
pp. 026765832110089
Author(s):  
Daniel J Olson

Featural approaches to second language phonetic acquisition posit that the development of new phonetic norms relies on sub-phonemic features, expressed through a constellation of articulatory gestures and their corresponding acoustic cues, which may be shared across multiple phonemes. Within featural approaches, largely supported by research in speech perception, debate remains as to the fundamental scope or ‘size’ of featural units. The current study examines potential featural relationships between voiceless and voiced stop consonants, as expressed through the voice onset time (VOT) cue. Native English-speaking learners of Spanish received targeted training on Spanish voiceless stop consonant production through a visual feedback paradigm. Analysis focused on the change in VOT, for both voiceless (i.e. trained) and voiced (i.e. non-trained) phonemes, across the pretest, posttest, and delayed posttest. The results demonstrated a significant improvement (i.e. reduction) in VOT for the voiceless stops, which were the target of the training paradigm. In contrast, there was no significant change for the non-trained voiced stop consonants. These results suggest a limited featural relationship, with independent VOT cues for voiceless and voiced phonemes. Possible underlying mechanisms that limit feature generalization in second language (L2) phonetic production, including gestural considerations and acoustic similarity, are discussed.
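The abstract describes the analysis only verbally. As a minimal sketch of the kind of comparison reported there, assuming hypothetical VOT measurements in milliseconds (the phase labels, categories, and values below are illustrative, not data from the study), the pre-to-post change for trained versus non-trained stops could be summarized like this:

```python
# Minimal sketch (hypothetical data): summarizing voice onset time (VOT) change
# across test phases for trained (voiceless) vs. non-trained (voiced) stops.
import statistics

# Each record: (phase, voicing category, VOT in milliseconds) -- illustrative values only.
measurements = [
    ("pretest", "voiceless", 62.0), ("posttest", "voiceless", 31.0),
    ("delayed", "voiceless", 35.0), ("pretest", "voiced", 12.0),
    ("posttest", "voiced", 11.0),   ("delayed", "voiced", 13.0),
]

def mean_vot(phase, category):
    """Average VOT for one test phase and voicing category."""
    values = [v for p, c, v in measurements if p == phase and c == category]
    return statistics.mean(values)

for category in ("voiceless", "voiced"):
    change = mean_vot("posttest", category) - mean_vot("pretest", category)
    print(f"{category}: pre-to-post VOT change = {change:.1f} ms")
```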


1985 ◽  
Vol 50 (2) ◽  
pp. 126-131 ◽  
Author(s):  
Anna K. Nabelek ◽  
Tomasz R. Letowski

The effects of reverberation on the perception of vowels and diphthongs were evaluated using 10 subjects with moderate sensorineural hearing losses. Stimuli were 15 English vowels and diphthongs, spoken between /b/ and /t/ and recorded in a carrier sentence. The test was recorded without and with reverberation (T = 1.2 s). Although vowel confusions occurred in both test conditions, the number of vowels and diphthongs affected and the total number of errors made were significantly greater under the reverberant condition. The results indicated that the perception of vowels by hearing-impaired listeners can be influenced substantially by reverberation. Errors for vowels in reverberation seemed to be related to the overestimation of vowel duration and to a tendency to perceive the pitch of the formant frequencies as being higher than in vowels without reverberation. Error patterns were somewhat individualized among subjects.


2013 ◽  
Author(s):  
Masako Fujimoto ◽  
Tatsuya Kitamura ◽  
Hiroaki Hatano ◽  
Ichiro Fujimoto

1983 ◽  
Vol 74 (S1) ◽  
pp. S39-S39
Author(s):  
Patricia A. Flynn ◽  
Jeffrey L. Danhauer ◽  
Dennis J. Arnst ◽  
Monica C. Goller ◽  
Sanford E. Gerber

1979 ◽  
Vol 31 (1) ◽  
pp. 82-88 ◽  
Author(s):  
Thomas Murry ◽  
William S. Brown, Jr.

1992 ◽  
Vol 35 (4) ◽  
pp. 942-949 ◽  
Author(s):  
Christopher W. Turner ◽  
David A. Fabry ◽  
Stephanie Barrett ◽  
Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of the speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, detection of the consonants occurred at the same signal-to-noise ratio for both subject groups. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some subjects with hearing loss are not due, even in part, to a deficit in detection of the signals in the masking noise, but rather are due exclusively to a deficit in recognition.
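The abstract does not specify how the psychometric functions were fit. As a rough sketch of the general approach it implies, assuming a logistic fit of proportion correct against signal-to-noise ratio (the data points, starting guesses, and 50%-correct criterion below are made up for illustration), a threshold SNR could be estimated as follows:

```python
# Minimal sketch (illustrative data): fitting a logistic psychometric function of
# proportion correct vs. signal-to-noise ratio (SNR) and reading off a threshold.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr_db, threshold_db, slope):
    """Proportion correct as a function of SNR, rising from 0 toward 1."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - threshold_db)))

# Hypothetical detection scores (SNR in dB, proportion correct) -- not from the study.
snr = np.array([-15.0, -12.0, -9.0, -6.0, -3.0, 0.0])
p_correct = np.array([0.08, 0.22, 0.45, 0.71, 0.90, 0.97])

params, _ = curve_fit(logistic, snr, p_correct, p0=[-8.0, 0.5])
threshold_db, slope = params
print(f"Estimated 50%-correct SNR: {threshold_db:.1f} dB (slope {slope:.2f}/dB)")
```

Comparing thresholds estimated this way for detection versus recognition, and for normal-hearing versus hearing-impaired groups, is the kind of contrast the abstract reports.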

