articulatory gesture
Recently Published Documents

TOTAL DOCUMENTS: 17 (five years: 5)
H-INDEX: 3 (five years: 0)

2021 · Vol 16 (1) · pp. 165-198
Author(s): U. Marie Engemann, Ingo Plag

Abstract: Recent work on the acoustic properties of complex words has found that morphological information may influence the phonetic properties of words, e.g. acoustic duration. Paradigm uniformity has been proposed as one mechanism that may cause such effects. In a recent experimental study, Seyfarth et al. (2017) found that the stems of English inflected words (e.g. frees) have a longer duration than the same string of segments in a homophonous mono-morphemic word (e.g. freeze), due to co-activation of the longer articulatory gesture of the bare stem (e.g. free). However, not all effects predicted by paradigm uniformity were found in that study, and the role of frequency-related phonetic reduction remained inconclusive. The present paper attempts to replicate the effect with conversational speech data from a different variety of English (New Zealand English), drawn from the QuakeBox Corpus (Walsh et al. 2013). With word-form frequency included as a predictor, plural stems were not found to be significantly longer than the corresponding strings in comparable non-complex words. The analysis did, however, reveal a frequency-induced gradient paradigm uniformity effect: plural stems become shorter as the frequency of the bare stem increases.
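The key test here is whether plural-stem duration still varies with the frequency of the bare stem once word-form frequency is controlled for. Below is a minimal sketch of that kind of analysis; the data frame, the column names (stem_duration, log_bare_stem_freq, log_wordform_freq, speaker), and the choice of a mixed-effects model are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: regress plural-stem duration on bare-stem frequency
# while controlling for word-form frequency, with a random intercept per speaker.
# Column names and the toy data are hypothetical, not from the QuakeBox study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "stem_duration": rng.normal(0.25, 0.05, n),      # stem duration in seconds
    "log_bare_stem_freq": rng.normal(8.0, 1.5, n),   # log frequency of e.g. "free"
    "log_wordform_freq": rng.normal(7.0, 1.5, n),    # log frequency of e.g. "frees"
    "speaker": rng.integers(0, 20, n).astype(str),
})

# A negative coefficient for log_bare_stem_freq would correspond to the reported
# gradient effect: plural stems shorten as the bare stem becomes more frequent.
model = smf.mixedlm(
    "stem_duration ~ log_bare_stem_freq + log_wordform_freq",
    data=df,
    groups="speaker",
).fit()
print(model.summary())
```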


2021
Author(s): Fabian Tomaschek, Michael Ramscar, Samuel Thiele, Barbara Kaup, R. H. Baayen

It has been shown that a movement in a direction incongruent with the spatial semantics of a word typically requires more time than a directionally congruent movement. Two explanations have been proposed for this effect: either a word's meaning is understood by using an internal model to simulate that meaning, and incongruent directionality takes time to resolve, or words simply serve to reduce hearers' uncertainty about future states of the world, facilitating actions that prepare for them. However, because previous experiments have focused on actions directly involved in the exploration of space, they provide evidence for both hypotheses. Experiment 1 of the present study avoids this shortcoming. We investigated the basic downward-directed articulatory gesture that produces a high-frequency German word, "ja" ('yes'), in response to reading words with vertical semantics; the task is thus completely unrelated to the semantics of those words. We show that tongue movements are systematically modulated by verticality ratings collected from the same speakers. To investigate the source of the effect, we performed two additional, linguistically unrelated experiments. Experiment 2 demonstrates anti-phasic coupling between tongue body movements and vertical arm and leg movements. Experiment 3 investigates tongue body movements prior to head movements and uncovers preparatory tongue raising before head raising, in contrast to head lowering. Taken together, the results indicate that the changes in "ja" associated with vertical semantics most likely emerge from anticipating a head movement in the direction of the spatial target associated with the word that was read, so as to optimize body position for subsequent actions. The results thus support the assumption that words reduce uncertainty about future states of the world.
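Anti-phasic coupling of the kind reported in Experiment 2 is commonly assessed by comparing the instantaneous phase of two movement signals: if tongue-body height and arm height move in opposite phase, their phase difference clusters near ±180°. The sketch below illustrates one such check using a Hilbert transform; the synthetic signals, names, and sampling rate are assumptions for illustration, not the analysis actually reported in the abstract.

```python
# Illustrative sketch: estimate the phase relation between two oscillatory
# movement signals (e.g. tongue-body height and arm height). Anti-phasic
# coupling shows up as a mean phase difference near +/-180 degrees.
# The synthetic signals and the 250 Hz sampling rate are assumptions.
import numpy as np
from scipy.signal import hilbert

fs = 250                       # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)   # 10 seconds of data
freq = 1.5                     # movement cycle frequency in Hz

# Synthetic example: the arm moves up while the tongue body moves down.
arm = np.sin(2 * np.pi * freq * t)
tongue = -np.sin(2 * np.pi * freq * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Instantaneous phase of each signal via the analytic (Hilbert) signal.
phase_arm = np.angle(hilbert(arm))
phase_tongue = np.angle(hilbert(tongue))

# Circular mean of the phase difference; ~180 degrees indicates anti-phase coupling.
dphi = phase_tongue - phase_arm
mean_dphi = np.angle(np.mean(np.exp(1j * dphi)))
print(f"mean phase difference: {np.degrees(mean_dphi):.1f} degrees")
```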


2020 · Vol 1 (1) · pp. 23-42
Author(s): Daniel Recasens

Articulatory data are provided showing that, in languages in which they have phonemic status, (alveolo)palatal consonants, dark /l/ and the trill /r/ are articulated with a single lingual gesture rather than two independent tongue-front and tongue-body gestures. They are therefore simple, not complex, segments. It is argued that tongue body lowering and retraction for dark /l/ and the trill /r/ is associated with manner-of-articulation requirements and, in the case of dark /l/, with requirements on the implementation of the darkness percept, and that tongue body raising and fronting for (alveolo)palatals results naturally from contraction of the genioglossus muscle. These consonant units resemble truly complex palatalized and velarized or pharyngealized dentoalveolars with respect to lingual configuration and kinematics, as well as coarticulatory effects and phonological and sound-change processes. Contrary to some views, the study also contends that clear /l/ and the tap /ɾ/ are not complex segments but consonants articulated with a more or less neutral tongue body configuration that is subject to considerable vowel coarticulation.


2016 · Vol 36 · pp. 330-346
Author(s): Vikram Ramanarayanan, Maarten Van Segbroeck, Shrikanth S. Narayanan

2011 · Vol 4 (2)
Author(s): Benjamin Parrell

Abstract: An overview of Articulatory Phonology and a computational model based on this framework are presented. In Articulatory Phonology the basic unit is the articulatory gesture, which both characterizes an articulatory event and functions as the element of phonological contrast. Syllables and larger utterances are hypothesized to be combinations of gestures, which may overlap in time. Gestures are modeled as damped mass-spring systems, each associated with an abstract planning clock that triggers its activation. Timing between gestures is modeled as coupling relationships between these clocks. The Task Dynamics Application (TaDA) is a computational implementation of the hypotheses of Articulatory Phonology, currently available for both English and Spanish and expandable to other languages. Operation of the model is demonstrated. Past uses of the model include predicting reaction times, testing hypotheses generated from examination of experimental articulatory data, and text-to-speech systems. Information is included on how to obtain both TaDA and the Spanish gestural dictionary module.
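The damped mass-spring idea can be made concrete in a few lines of code: while a gesture is active, it drives its tract variable (e.g. a constriction degree) toward a target as a critically damped second-order system. The sketch below is a minimal illustration of that dynamic under stated assumptions, not TaDA itself; the parameter values, variable names, and Euler integration are assumptions.

```python
# Minimal sketch of a gestural dynamic in the spirit of Task Dynamics:
# a critically damped mass-spring pulls a tract variable toward its target
# while the gesture is active. This illustrates the idea only; targets,
# stiffness, and the activation window are hypothetical values.
import numpy as np

def gesture_trajectory(x0, target, stiffness, t_on, t_off, dt=0.001, t_end=0.5):
    """Integrate x'' = -k (x - target) - b x' with critical damping b = 2*sqrt(k),
    applying the gestural driving force only between t_on and t_off (unit mass)."""
    b = 2.0 * np.sqrt(stiffness)          # critical damping coefficient
    times = np.arange(0.0, t_end, dt)
    x, v = x0, 0.0
    traj = []
    for t in times:
        if t_on <= t < t_off:             # gesture active: drive toward target
            a = -stiffness * (x - target) - b * v
        else:                             # gesture inactive: damping only
            a = -b * v
        v += a * dt                       # simple Euler integration step
        x += v * dt
        traj.append(x)
    return times, np.array(traj)

# Example: a constriction gesture moving a tract variable from 10 mm to 2 mm,
# active between 50 ms and 300 ms.
times, traj = gesture_trajectory(x0=10.0, target=2.0, stiffness=400.0,
                                 t_on=0.05, t_off=0.30)
print(f"tract variable at end of simulation: {traj[-1]:.2f} mm")
```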


2009
Author(s): Prasanta Kumar Ghosh, Shrikanth S. Narayanan, Pierre Divenyi, Louis Goldstein, Elliot Saltzman
