articulatory movement
Recently Published Documents


TOTAL DOCUMENTS

67
(FIVE YEARS 10)

H-INDEX

11
(FIVE YEARS 1)

Author(s):  
Yu-Wen Chen ◽  
Kuo-Hsuan Hung ◽  
Shang-Yi Chuang ◽  
Jonathan Sherman ◽  
Xugang Lu ◽  
...  

Author(s):  
Kikuo Maekawa

ABSTRACT Japanese moraic nasal /N/ is a nasal segment with the status of an independent mora. In utterance-medial position, it is realized as a nasal segment sharing the place of articulation of the immediately following segment, but in utterance-final position it is believed to be realized as a uvular nasal. This final-/N/-as-uvular view, which is widespread in the literature on Japanese phonetics and phonology, was examined objectively by use of real-time MRI movies of the articulatory movement of eleven Tokyo Japanese speakers. It turned out that utterance-final /N/ is realized at a wide range of locations along the palate, from the hard palate to the uvula. GLMM modeling showed that the closure locations of utterance-final /N/ can be predicted accurately from the identity of the preceding vowel. In addition, leave-one-out cross-validation showed that the model generalizes to new data. We conclude that the realization of utterance-final /N/ is not fixed to uvular; its place of articulation is determined largely by the properties of the preceding vowel.
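A minimal sketch of the kind of analysis described, under loudly labeled assumptions: the input file and column names (closure_loc, prev_vowel, speaker) are hypothetical, and a linear mixed model from statsmodels stands in for the paper's GLMM, with per-observation leave-one-out cross-validation.

```python
# Hypothetical sketch, not the paper's actual code: a linear mixed model
# stands in for the GLMM; the file and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("final_N_closures.csv")  # hypothetical input table

# Mixed model: closure location on the palate predicted by the identity
# of the preceding vowel, with a per-speaker random intercept.
fit = smf.mixedlm("closure_loc ~ C(prev_vowel)", data,
                  groups=data["speaker"]).fit()
print(fit.summary())

# Leave-one-out cross-validation: refit without each observation and
# predict the held-out closure location from the fixed effects.
errors = []
for i in data.index:
    train = data.drop(index=i)
    m = smf.mixedlm("closure_loc ~ C(prev_vowel)", train,
                    groups=train["speaker"]).fit()
    pred = float(np.asarray(m.predict(data.loc[[i]]))[0])
    errors.append(pred - float(data.loc[i, "closure_loc"]))
print("LOOCV RMSE:", np.sqrt(np.mean(np.square(errors))))
```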


2021 ◽  
Vol 11 ◽  
pp. 14-22
Author(s):  
Erika Ozawa ◽  
Ei-ichi Honda ◽  
Hiroshi Tomizato ◽  
Tohru Kurabayashi ◽  
Kulthida Nunthayanon ◽  
...  

Objectives: Previous studies have reported that articulatory dysfunction accompanying certain types of malocclusion can be improved by orthodontic treatment. We developed a 3-T magnetic resonance imaging (MRI) movie method with tooth visualization that can display the dynamic movement of articulation without radiation exposure. To the best of our knowledge, there is currently no report on possible differences in articulatory movement between subjects with a normal occlusion and those with malocclusion using the 3-T MRI movie method. Thus, the objective of this study was to examine the articulatory differences between subjects with a normal occlusion and those with an open bite using an MRI movie. Materials and Methods: Twenty healthy adult females were recruited: ten with a normal occlusion and ten with an anterior open bite. The overbite of the open-bite subjects was zero or negative, and all of them exhibited a tongue-thrusting habit during swallowing. A turbo spin echo image with a contrast medium was used to visualize the anterior teeth, and articulatory movement during production of the vowel-consonant-vowel sequence /asa/ was scanned. Differences in tongue movement between the two groups were compared by measuring seven variables. Moreover, the distance between the incisal edge and the tongue apex during articulation of /s/ and the speech duration were compared. Furthermore, frequency analysis of /s/ was performed using the fast Fourier transform power spectrum. Results: The tongue apex of the open-bite subjects moved more anteriorly than that of the normal subjects. However, there was no significant difference in the phonetic analysis between the two groups. Conclusion: The 3-T MRI movie was an efficient method for quantifying articulatory tongue movements. Although there was a difference in tongue movement during swallowing between subjects with a normal occlusion and those with an open bite, the difference in articulatory tongue movements was minimal, suggesting functional compensation.
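A hedged sketch of the spectral step: extracting the /s/ portion of a recording and computing its FFT power spectrum. The file name, segment boundaries, and mono format are assumptions, not details from the paper.

```python
# Hypothetical sketch of an FFT power-spectrum analysis of /s/; the
# recording, the /s/ boundaries, and the mono format are all assumed.
import numpy as np
from scipy.io import wavfile

rate, signal = wavfile.read("asa_utterance.wav")   # hypothetical recording
s_start, s_end = 0.35, 0.50                        # assumed /s/ boundaries (s)
segment = signal[int(s_start * rate):int(s_end * rate)].astype(float)

# Window the segment, then take the one-sided FFT power spectrum.
windowed = segment * np.hanning(len(segment))
spectrum = np.abs(np.fft.rfft(windowed)) ** 2
freqs = np.fft.rfftfreq(len(windowed), d=1.0 / rate)

# The spectral peak of /s/ is one simple quantity to compare across groups.
print("peak frequency (Hz):", freqs[np.argmax(spectrum)])
```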


2020 ◽  
Author(s):  
Wim Pouw ◽  
Mark Dingemanse ◽  
Yasamin Motamedi ◽  
Asli

Reverse engineering how language emerged is a daunting interdisciplinary project. Experimental cognitive science has contributed to this effort by eliciting in the lab constraints likely to play a role in language emergence, such as iterated transmission of communicative tokens between agents. Since such constraints played out over long phylogenetic time and involved vast populations, a crucial challenge for iterated language learning paradigms is to extend their limits. In the current approach we perform a multiscale quantification of the kinematic changes of an evolving silent gesture system. Silent gestures consist of complex multi-articulatory movements that have so far proven elusive to quantify in a structured and reproducible way, and they are primarily studied through human coders meticulously interpreting the referential content of gestures. Here we reanalyzed video data from a silent gesture iterated learning experiment (Motamedi et al. 2019), which originally showed increases in the systematicity of gestural form over language transmissions. We applied a signal-based approach, first using computer vision techniques to quantify kinematics from the video data. We then performed a multiscale kinematic analysis showing that, over generations of language users, silent gestures became more efficient and less complex in their kinematics. We further detected systematicity in the interrelations of the communicative tokens, which proved to be a proxy for the systematicity obtained via human observation. We thus demonstrate the potential of a signal-based approach to language evolution in complex multi-articulatory gestures.
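A minimal sketch of the signal-based step under stated assumptions: the keypoint CSV (x, y wrist coordinates per frame) and the frame rate are hypothetical stand-ins for the authors' computer-vision pipeline; total path length serves as a coarse efficiency proxy and speed-profile peaks (submovements) as a coarse complexity proxy.

```python
# Hypothetical sketch: kinematic quantification from video-tracked keypoints.
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

fps = 25.0                                   # assumed video frame rate
track = pd.read_csv("wrist_keypoints.csv")   # hypothetical tracking output

# Smooth the trajectory, then differentiate to get speed (px/s).
x = gaussian_filter1d(track["x"].to_numpy(), sigma=2)
y = gaussian_filter1d(track["y"].to_numpy(), sigma=2)
speed = np.hypot(np.diff(x), np.diff(y)) * fps

# Coarse summaries: total path length (efficiency) and the number of
# submovements, i.e. prominent peaks in the speed profile (complexity).
path_length = speed.sum() / fps
peaks, _ = find_peaks(speed, prominence=speed.std())
print("path length (px):", path_length, "| submovements:", len(peaks))
```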



Speech Timing ◽  
2020 ◽  
pp. 8-48
Author(s):  
Alice Turk ◽  
Stefanie Shattuck-Hufnagel

This chapter summarizes the basic mechanisms of the Articulatory Phonology/Task Dynamics (AP/TD) model, currently the most thoroughly worked-out model in the literature, with a focus on the system-intrinsic mechanisms it uses to account for systematic variation in speech timing. Key features of the model are reviewed, and oscillator-based mechanisms are described for the timing control of articulatory gestures, the control of inter-gestural coordination, prosodic timing control, and the control of overall speech rate. Strengths of the AP/TD approach are discussed, including facts that are well accounted for within this model, such as the predominance of CV syllables among the world's languages, as well as characteristics of processing within the model that are assumed to be advantageous, such as avoiding the need to explicitly plan the details of articulatory movement when planning an utterance. This presentation forms the basis of the evaluation presented in subsequent chapters.
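For concreteness: in AP/TD, an individual gesture is standardly modeled as a critically damped mass-spring (point-attractor) system driven toward its target. A minimal numerical sketch, with illustrative parameter values that are not taken from the chapter:

```python
# Illustrative sketch of a single AP/TD-style gesture as a critically
# damped mass-spring system; all parameter values are made up.
import numpy as np

k, m = 100.0, 1.0            # stiffness and mass (illustrative)
b = 2.0 * np.sqrt(k * m)     # critical damping: approach target, no overshoot
target, dt = 1.0, 0.001      # gestural target and integration step (s)

x, v = 0.0, 0.0              # tract variable starts away from target, at rest
trajectory = []
for _ in range(int(0.5 / dt)):           # simulate 500 ms
    a = (-k * (x - target) - b * v) / m  # point-attractor dynamics
    v += a * dt
    x += v * dt
    trajectory.append(x)

print("position at 100 / 250 / 500 ms:",
      trajectory[99], trajectory[249], trajectory[-1])
```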


Speech Timing ◽  
2020 ◽  
pp. 238-263
Author(s):  
Alice Turk ◽  
Stefanie Shattuck-Hufnagel

This chapter addresses the nature of the general-purpose timekeeping mechanisms that are assumed in phonology-extrinsic-timing models of speech production. The first part of the chapter discusses current questions about the nature of these mechanisms. The second part presents Lee's General Tau theory (Lee 1998, 2009), a theory of the temporal guidance of voluntary movement. This theory provides a crucial component of our phonology-extrinsic-timing-based, three-component model of speech production, because its tau-coupling mechanism provides a way to plan movements with appropriate velocity profiles, as well as endpoint-based movement coordination. In doing so, it offers a general-purpose, phonology-extrinsic alternative to AP/TD's use of oscillators to control the time course of articulatory movement and coordination.
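To illustrate the tau variable at the core of the theory: tau of a closing gap is the gap divided by its rate of closure, and keeping tau-dot constant between 0 and 0.5 closes the gap with velocity tending to zero at contact. A minimal sketch with made-up values:

```python
# Illustrative sketch of Lee's tau for a closing gap; values are made up.
import numpy as np

tau0 = -0.5    # initial tau (s): gap / closure rate; negative while closing
k = 0.4        # constant tau-dot; 0 < k <= 0.5 yields "soft" contact
x0 = 0.10      # initial gap (m), e.g. an articulator's distance to target

# With tau(t) = tau0 + k*t, integrating dx/x = dt/tau gives
# x(t) = x0 * ((tau0 + k*t) / tau0) ** (1/k); the gap closes at t = -tau0/k.
t = np.linspace(0.0, -tau0 / k, 200)
ratio = np.clip((tau0 + k * t) / tau0, 0.0, None)
x = x0 * ratio ** (1.0 / k)
v = np.gradient(x, t)

print("gap at start / contact:", x[0], x[-1])
print("closure rate near contact:", v[-2])   # tends to zero: soft contact
```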

