Commentary: “Vowel Quality and Direction of Stress Shift in a Predictive Model Explaining the Varying Impact of Misplaced Word Stress: Evidence From English” and “Exploring the Complexity of the L2 Intonation System: An Acoustic and Eye-Tracking Study”

2021 ◽  
Vol 6 ◽  
Author(s):  
Alison McGregor


2017 ◽  
Vol 21 (3) ◽  
pp. 640-652 ◽  
Author(s):  
Annie Tremblay ◽  
Mirjam Broersma ◽  
Caitlin E. Coughlin

This study is the first to investigate whether the functional weight of a prosodic cue in the native language predicts listeners’ learning and use of that cue in second-language speech segmentation. It compares English and Dutch listeners’ use of fundamental-frequency (F0) rise as a cue to word-final boundaries in French. F0 rise signals word-initial boundaries in both English and Dutch, but it carries a weaker functional weight in English than in Dutch because it is more strongly correlated with vowel quality in English than in Dutch. English- and Dutch-speaking learners of French matched in French proficiency and experience, together with native French listeners, completed a visual-world eye-tracking experiment in French in which they monitored words ending with or without an F0 rise (a replication of Tremblay, Broersma, Coughlin & Choi, 2016). Dutch listeners made earlier and greater use of the F0 rise than English listeners, and in one condition they made greater use of the F0 rise than French listeners, extending the cue-weighting theory to speech segmentation.
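As an illustration of how data from such a visual-world experiment are typically summarized (this is not the authors' analysis code), the sketch below computes the proportion of eye-tracking samples falling on the target picture in successive time bins after target-word onset; the column names and toy data are hypothetical.

```python
# Hypothetical sketch: summarize visual-world fixations as the proportion of
# samples on the target picture per time bin, split by listener group.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """samples: one row per eye-tracking sample, with columns
    'time_ms' (time from target-word onset), 'group' (e.g., 'English', 'Dutch'),
    and 'on_target' (1 if the gaze falls on the target picture, else 0)."""
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms  # assign each sample to a bin
    return (samples.groupby(["group", "bin"])["on_target"]
                   .mean()                      # proportion of samples on target per bin
                   .rename("prop_target")
                   .reset_index())

# Toy example with six samples from two listener groups
toy = pd.DataFrame({
    "time_ms": [10, 60, 110, 10, 60, 110],
    "group": ["Dutch"] * 3 + ["English"] * 3,
    "on_target": [0, 1, 1, 0, 0, 1],
})
print(fixation_proportions(toy))
```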


2021 ◽  
Vol 6 ◽  
Author(s):  
Monica Ghosh ◽  
John M. Levis

The use of suprasegmental cues to word stress occurs across many languages. Nevertheless, L1 English listeners pay little attention to suprasegmental word stress cues, and evidence shows that segmental cues play a more important role in how they identify words in speech. L1 English listeners assume that strong syllables with full vowels mark the beginning of a new word, attempting alternative resegmentations only when this heuristic fails to identify a viable word string. English word stress errors have been shown to severely disrupt processing for both L1 and L2 listeners, but not all word stress errors are equally damaging. Vowel quality and direction of stress shift are thought to predict the intelligibility of non-standard stress pronunciations, but most research on this topic has so far been limited to two-syllable words. The current study uses auditory lexical decision and delayed word identification tasks to test a hypothesized English Word Stress Error Gravity Hierarchy for words of two to five syllables. Results indicate that English word stress errors affect intelligibility most when they introduce concomitant vowel errors, an effect that is somewhat mediated by the direction of stress shift. As a consequence, the relative intelligibility impact of any particular lexical stress error can be predicted by the Hierarchy for both L1 and L2 English listeners. These findings have implications for L1 and L2 English pronunciation research and teaching. For research, our results demonstrate that varied findings about loss of intelligibility are connected to whether word stress errors change vowel quality, and that these factors must be accounted for in intelligibility research. For teaching, the results indicate that not all word stress errors are equally important, and that only word stress errors that affect vowel quality should be prioritized.
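As a purely illustrative sketch (not the authors' materials or model), the snippet below shows one way the Hierarchy's two predictors, vowel-quality change and direction of stress shift, might be coded for a given mispronunciation; the class, field names, and severity labels are hypothetical.

```python
# Hypothetical coding of the two predictors in the proposed Word Stress Error
# Gravity Hierarchy: vowel-quality change and direction of stress shift.
from dataclasses import dataclass

@dataclass
class StressError:
    standard_stress: int          # index of the stressed syllable in the standard form
    produced_stress: int          # index of the stressed syllable as produced
    vowel_quality_changed: bool   # True if the error also changes vowel quality

def predicted_severity(err: StressError) -> str:
    """Return a coarse prediction of intelligibility cost, following the abstract:
    errors that introduce vowel errors are most damaging, with direction of
    shift as a secondary factor."""
    direction = "rightward" if err.produced_stress > err.standard_stress else "leftward"
    if err.vowel_quality_changed:
        return f"high impact (vowel quality error, {direction} shift)"
    return f"lower impact (stress shift only, {direction} shift)"

# e.g., stressing the first syllable of 'committee' and producing its vowel as full
print(predicted_severity(StressError(standard_stress=1, produced_stress=0,
                                     vowel_quality_changed=True)))
```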


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of the physical device, the developmental gap of language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded by eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject and language had significant influences. Specifically, the normal-hearing and the deaf participants mainly gazed at the speaker's eyes and mouth, respectively, during the experiment; moreover, while the normal-hearing participants had to stare longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
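A minimal sketch of how dwell-time proportions on eye and mouth regions of interest might be summarized from fixation data follows; it illustrates the analysis idea rather than the study's own code, and the column names and toy values are hypothetical.

```python
# Hypothetical sketch: proportion of total dwell time spent on each area of
# interest (eyes, mouth, other), by participant group and stimulus language.
import pandas as pd

def aoi_proportions(fixations: pd.DataFrame) -> pd.DataFrame:
    """fixations: one row per fixation, with columns 'group'
    ('normal_hearing' or 'deaf'), 'language' ('Chinese' or 'Uygur'),
    'aoi' ('eyes', 'mouth', or 'other'), and 'duration_ms'."""
    totals = fixations.groupby(["group", "language"])["duration_ms"].transform("sum")
    fixations = fixations.assign(prop=fixations["duration_ms"] / totals)
    return (fixations.groupby(["group", "language", "aoi"])["prop"]
                     .sum()                     # share of total dwell time per AOI
                     .reset_index())

# Toy example: one familiar-language block for each participant group
toy = pd.DataFrame({
    "group": ["deaf", "deaf", "normal_hearing", "normal_hearing"],
    "language": ["Chinese"] * 4,
    "aoi": ["mouth", "eyes", "eyes", "mouth"],
    "duration_ms": [800, 200, 700, 300],
})
print(aoi_proportions(toy))
```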


1992 ◽  
Vol 35 (4) ◽  
pp. 892-902 ◽  
Author(s):  
Robert Allen Fox ◽  
Lida G. Wall ◽  
Jeanne Gokcen

This study examined age-related differences in the use of dynamic acoustic information (in the form of formant transitions) to identify vowel quality in CVCs. Two versions of 61 naturally produced, commonly occurring, monosyllabic English words were created: a control version (the unmodified whole word) and a silent-center version (in which approximately 62% of the medial vowel was replaced by silence). A group of normal-hearing young adults (19–25 years old) and older adults (61–75 years old) identified these tokens. The older subjects were found to be significantly worse than the younger subjects at identifying the medial vowel and the initial and final consonants in the silent-center condition. These results support the hypothesis of an age-related decrement in the ability to process dynamic perceptual cues in the perception of vowel quality.
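A minimal sketch of the silent-center manipulation described above is given below: roughly the middle 62% of the medial vowel is replaced by silence while the onset and offset formant transitions are left intact. The function, sample indices, and toy signal are hypothetical, not the authors' stimulus-preparation code.

```python
# Hypothetical sketch of the silent-center manipulation: replace the middle ~62%
# of the medial vowel with silence, keeping the formant transitions at the edges.
import numpy as np

def silent_center(signal: np.ndarray, vowel_start: int, vowel_end: int,
                  proportion: float = 0.62) -> np.ndarray:
    """signal: mono audio samples; vowel_start/vowel_end: sample indices
    bounding the medial vowel; proportion: fraction of the vowel to silence."""
    out = signal.copy()
    vowel_len = vowel_end - vowel_start
    cut_len = int(round(vowel_len * proportion))
    cut_start = vowel_start + (vowel_len - cut_len) // 2  # center the silent gap
    out[cut_start:cut_start + cut_len] = 0.0               # zero out the vowel center
    return out

# Toy example: a 100-sample 'word' whose vowel spans samples 30-70
word = np.random.default_rng(0).normal(size=100)
token = silent_center(word, vowel_start=30, vowel_end=70)
```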


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual-world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.

