The influence of vowel context on the Japanese listener’s identification of English voiceless fricatives

1999 ◽ Vol. 105(2) ◽ pp. 1093–1093
Author(s): William L. Martens, Stephen Lambacher
1997 ◽ Vol. 40(4) ◽ pp. 877–893
Author(s): Anders Löfqvist, Vincent L. Gracco

This paper reports two experiments, each designed to clarify different aspects of bilabial stop consonant production. The first examined events during the labial closure using kinematic recordings in combination with records of oral air pressure and force of labial contact. The results of this experiment suggested that the lips were moving at high velocity when the oral closure occurred. They also indicated mechanical interactions between the lips during the closure, including tissue compression and the lower lip pushing the upper lip upward. The second experiment studied patterns of upper and lower lip interactions, movement variability within and across speakers, and the effects of stop consonant voicing and vowel context on lip and jaw kinematics. Again, the results showed that the lips were moving at high velocity at the onset of the oral closure. No consistent influences of stop consonant voicing were observed on lip and jaw kinematics in five subjects, nor on a derived measure of lip aperture. The overall results are compatible with the hypothesis that one target for the lips in bilabial stop production is a region of negative lip aperture. A negative lip aperture implies that, to reach their virtual target, the lips would have to move beyond each other. Such a control strategy ensures that the lips form an airtight seal irrespective of any contextual variability in the onset positions of their closing movements.
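To make the derived measure explicit: lip aperture in this literature is usually the vertical distance between upper- and lower-lip positions, so a negative target value places the virtual goal beyond the point of contact. The formulation below is a minimal sketch under that assumed sign convention, not a formula quoted from the paper.

\[
\mathrm{LA}(t) \;=\; y_{\mathrm{UL}}(t) \;-\; y_{\mathrm{LL}}(t),
\qquad
\mathrm{LA} > 0:\ \text{lips apart},
\qquad
\mathrm{LA} = 0:\ \text{contact}.
\]

A virtual target \( \mathrm{LA}^{*} < 0 \) can never be physically reached: driving the lips toward it guarantees contact, tissue compression, and an airtight seal regardless of where the closing movements start.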


2004 ◽ Vol. 116(4) ◽ pp. 2571–2571
Author(s): Sang‐hee Yeon, Ratree Wayland, James Harnsberger, Jenna Silver

2015 ◽ Vol. 137(4) ◽ pp. 2382–2382
Author(s): William F. Katz, Sonya Mehta, Amy Berglund

2018 ◽ Vol. 31(1–2) ◽ pp. 79–110
Author(s): Denis Burnham, Barbara Dodd

Cross-language McGurk Effects are used to investigate the locus of auditory–visual speech integration. Experiment 1 uses the fact that [ŋ], as in ‘sing’, is phonotactically legal in word-final position in English and Thai, but in word-initial position only in Thai. English and Thai language participants were tested for ‘n’ perception from auditory [m]/visual [ŋ] (A[m]V[ŋ]) in word-initial and word-final positions. Despite English speakers’ native language bias to label word-initial [ŋ] as ‘n’, the incidence of ‘n’ percepts to A[m]V[ŋ] was equivalent for English and Thai speakers in final and initial positions. Experiment 2 used the facts that (i) [ð], as in ‘that’, is not present in Japanese, and (ii) English speakers respond more often with ‘tha’ than ‘da’ to A[ba]V[ga], but more often with ‘di’ than ‘thi’ to A[bi]V[gi]. English and three groups of Japanese language participants (Beginner, Intermediate, Advanced English knowledge) were presented with A[ba]V[ga] and A[bi]V[gi] by an English (Experiment 2a) or a Japanese (Experiment 2b) speaker. Despite Japanese participants’ native language bias to perceive ‘d’ more often than ‘th’, the four groups showed a similar phonetic-level effect of [a]/[i] vowel context × ‘th’ vs. ‘d’ responses to A[b]V[g] presentations. In Experiment 2b this phonetic-level interaction held, but was more one-sided, as very few ‘th’ responses were evident, even in Australian English participants. Results are discussed in terms of a phonetic plus post-categorical model, in which incoming auditory and visual information is integrated at a phonetic level, after which there are post-categorical phonemic influences.


1985 ◽ Vol. 28(1) ◽ pp. 87–95
Author(s): Sandra Gordon-Salant

The purpose of this investigation was to determine whether normal-hearing and hearing-impaired listeners perceive phoneme features differently in noise, and whether phoneme perception changes as a function of signal-to-noise ratio (S/N). Consonant-vowel recognition by normal-hearing and hearing-impaired listeners was assessed in quiet and in three noise conditions. Analysis of total percent-correct recognition scores revealed significant effects of hearing status, S/N, and vowel context. Patterns of phoneme errors were analyzed by INDSCAL. The derived consonant features that accounted for phoneme errors in both subject groups were similar to those reported by other investigators; however, the weightings associated with the individual features varied with changes in noise condition. Although hearing-impaired listeners exhibited poorer overall nonsense-syllable recognition scores in noise than normal-hearing listeners, no specific set of features emerged from the multidimensional scaling procedures that could uniquely account for this performance deficit.
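To make the analysis pipeline concrete, the sketch below shows the general logic of turning a consonant confusion matrix into perceptual feature dimensions. It uses classical metric MDS from scikit-learn as a simplified stand-in for INDSCAL (which additionally estimates per-listener weightings on each dimension); the consonant set and confusion counts are invented purely so the example runs and do not reproduce the study's data.

# Simplified stand-in for an INDSCAL-style analysis of consonant confusions.
# All data below are illustrative, not from the study summarized above.
import numpy as np
from sklearn.manifold import MDS

consonants = ["p", "t", "k", "b", "d", "g", "f", "s"]

# responses[i, j] = how often stimulus i was labelled as consonant j
# (random counts here, just to make the sketch runnable)
rng = np.random.default_rng(0)
responses = rng.integers(1, 50, size=(len(consonants), len(consonants)))

# Convert confusions to a symmetric dissimilarity matrix: consonants that
# are confused often are treated as perceptually close.
p = responses / responses.sum(axis=1, keepdims=True)
similarity = (p + p.T) / 2
dissimilarity = 1.0 - similarity
np.fill_diagonal(dissimilarity, 0.0)

# Two scaling dimensions play the role of the "derived consonant features"
# (e.g. voicing, place) referred to in the abstract.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for c, (x, y) in zip(consonants, coords):
    print(f"{c}: ({x:+.2f}, {y:+.2f})")

Comparing such solutions across listener groups and S/N conditions is where feature weightings would be examined; INDSCAL estimates these weightings jointly rather than fitting each group separately.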

