Reliance on Visible Speech Cues During Multimodal Language Processing: Individual and Age Differences

2007 · Vol 33 (4) · pp. 373-397
Author(s): L. Thompson, E. Garcia, D. Malloy

2021
Author(s): Wim Pouw, Jan de Wit, Sara Bögels, Marlou Rasenberg, Branka Milivojevic, ...

Most manual communicative gestures that humans produce cannot be looked up in a dictionary: these gestures derive their meaning in large part from the communicative context and are not conventionalized. However, it remains understudied to what extent the communicative signal as such (bodily postures in movement, or kinematics) can inform about gesture semantics. Can we construct, in principle, a distribution-based semantics of gesture kinematics, similar to how word vectorization methods in natural language processing (NLP) are now widely used to study semantic properties in text and speech? For such a project to get off the ground, we need to know the extent to which semantically similar gestures also tend to be kinematically similar. In Study 1 we assess whether word2vec-based semantic distances between the concepts that participants were explicitly instructed to convey in silent gestures relate to the kinematic distances between those gestures as obtained from Dynamic Time Warping (DTW). In Study 2, a dyadic director-matcher study, we assess kinematic similarity between spontaneous co-speech gestures produced by interacting participants. Participants were asked before and after the interaction how they would name the objects; the semantic distances between the resulting names were then related to the kinematic distances between the gestures made when conveying those objects during the interaction. We find that the gestures' semantic relatedness reliably predicts their kinematic relatedness across these two highly divergent studies, which suggests that developing an NLP-style method for deriving semantic relatedness from kinematics is a promising avenue for automated multimodal recognition. Deeper implications for statistical learning processes in multimodal language are discussed.
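To make the kind of analysis described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' actual pipeline: pairwise word2vec cosine distances between concept labels, pairwise DTW distances between gesture trajectories, and a rank correlation over the resulting distance pairs. The model path "word2vec.bin", the toy concept labels, and the random trajectories are placeholders standing in for real motion-tracking data.

```python
# Illustrative sketch: relating semantic (word2vec) distances between concept
# labels to kinematic (DTW) distances between gesture trajectories.
# Placeholder paths and toy data; not the study's actual analysis code.

import numpy as np
from scipy.stats import spearmanr
from gensim.models import KeyedVectors

# Pretrained word embeddings (file path is a placeholder).
wv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

def semantic_distance(word_a: str, word_b: str) -> float:
    """Cosine distance (1 - cosine similarity) between two concept labels."""
    return wv.distance(word_a, word_b)

def dtw_distance(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Plain Dynamic Time Warping distance between two kinematic trajectories,
    each an array of shape [time, features] (e.g. wrist x/y/z positions)."""
    n, m = len(traj_a), len(traj_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def distance_matrices(labels, trajectories):
    """Pairwise semantic and kinematic distance matrices over a gesture set."""
    k = len(labels)
    sem = np.zeros((k, k))
    kin = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            sem[i, j] = sem[j, i] = semantic_distance(labels[i], labels[j])
            kin[i, j] = kin[j, i] = dtw_distance(trajectories[i], trajectories[j])
    return sem, kin

# Toy example: random trajectories stand in for motion-tracking recordings.
labels = ["hammer", "saw", "apple", "banana"]
rng = np.random.default_rng(0)
trajectories = [rng.normal(size=(rng.integers(40, 80), 3)) for _ in labels]

sem, kin = distance_matrices(labels, trajectories)
iu = np.triu_indices(len(labels), k=1)  # use each pair once (upper triangle)
rho, p = spearmanr(sem[iu], kin[iu])
print(f"Spearman correlation between semantic and kinematic distances: {rho:.2f} (p={p:.3f})")
```

A simple rank correlation is used here only for illustration; the studies' own statistical treatment of the distance pairs may differ.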


2019 · Vol 23 (8) · pp. 639-652
Author(s): Judith Holler, Stephen C. Levinson
