Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics

2021
Author(s): Wim Pouw, Jan de Wit, Sara Bögels, Marlou Rasenberg, Branka Milivojevic, ...

Most manual communicative gestures that humans produce cannot be looked up in a dictionary: they are not conventionalized, and they inherit their meaning in large part from the communicative context. However, it is understudied to what extent the communicative signal itself (bodily postures in movement, or kinematics) can inform about gesture semantics. Can we construct, in principle, a distribution-based semantics of gesture kinematics, similar to how word-vectorization methods in Natural Language Processing (NLP) are now widely used to study semantic properties of text and speech? For such a project to get off the ground, we need to know to what extent semantically similar gestures are also more likely to be kinematically similar. In Study 1 we assess whether the semantic word2vec distances between the concepts that participants were explicitly instructed to convey in silent gestures relate to the kinematic distances between those gestures, as obtained from Dynamic Time Warping (DTW). In a second, dyadic director-matcher study we assess the kinematic similarity of spontaneous co-speech gestures produced by interacting participants. Participants were asked before and after the interaction how they would name the objects; the semantic distances between the resulting names were then related to the kinematic distances between the gestures made while conveying those objects during the interaction. We find that the gestures' semantic relatedness reliably predicts their kinematic relatedness across these two highly divergent studies, which suggests that developing an NLP-style method for deriving semantic relatedness from kinematics is a promising avenue for automated multimodal recognition. Deeper implications for statistical learning processes in multimodal language are discussed.
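A minimal sketch of the kind of analysis described above, assuming gestures are available as time series of keypoint coordinates and concepts as pre-trained word2vec embeddings; the function names and data structures are illustrative, not taken from the study:

```python
import numpy as np
from scipy.stats import spearmanr

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two (T, D) keypoint series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def cosine_distance(u, v):
    """Semantic distance between two word vectors (e.g. word2vec embeddings)."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def relatedness_test(gestures, vectors):
    """gestures: dict concept -> (T, D) keypoint trajectory (hypothetical data);
    vectors:  dict concept -> word embedding for that concept (hypothetical data)."""
    concepts = list(gestures)
    kin, sem = [], []
    for i in range(len(concepts)):
        for j in range(i + 1, len(concepts)):
            kin.append(dtw_distance(gestures[concepts[i]], gestures[concepts[j]]))
            sem.append(cosine_distance(vectors[concepts[i]], vectors[concepts[j]]))
    # Rank correlation between pairwise semantic and kinematic distances.
    return spearmanr(sem, kin)
```

A positive Spearman correlation between the two sets of pairwise distances would indicate that semantically closer concepts are also gestured more alike, which is the relationship the studies test.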

2019 · Vol 9 (5) · pp. 908
Author(s): Chiu-Ching Tuan, Chi-Heng Lu, Yi-Chao Wu, Mei-Chuan Chen, Sung-Wei Chi, ...

In this paper, we introduce a simple sound-signal diagnostic method for evaluating anterior cruciate ligament (ACL) injury before and after reconstructive surgery. Sixty-five recruited participants were divided into control (n = 27) and experimental (n = 38) groups. Dynamic time warping of sound signals was applied to compare the healthy and injured limbs before surgery with those after surgery, using analysis of variance and Z-test analysis. In the control group, the average differences among the three sensing points ranged from 7.7 ± 3.4 to 18.9 ± 10.6 over the frequency range of 250 Hz to 4 kHz. In the experimental group, the average differences were between 6.2 ± 2.8 and 57.4 ± 21.3. The greatest significant wavelet-coefficient difference was observed in the 125–250 Hz range. Our preliminary results demonstrate that the proposed approach produces significant signal variations at the ACL test point (TPACL) when identifying ACL injury, with swing cycles conducted within the 4-kHz band of wavelet coefficients 1–7. Thus, wavelet analysis of knee sounds can be used to evaluate recovery status after single ACL reconstruction surgery. After a 1-year follow-up of the 38 patients with ACL injury, the frequency-band difference among sensing points was reduced from 274% to approximately 600%.
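As an illustration of the band-level comparison described above, the sketch below decomposes a knee-sound recording into octave sub-bands with PyWavelets and compares per-band energy before and after surgery; the wavelet family, decomposition depth, assumed 8 kHz sampling rate, and the percentage-difference measure are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np
import pywt

def band_energies(signal, wavelet="db4", level=5):
    """Multilevel discrete wavelet decomposition; one energy value per detail sub-band.
    Assuming an 8 kHz sampling rate and level = 5, the detail bands span roughly
    125-250 Hz up to 2-4 kHz in octave steps (coarsest band first)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # skip the approximation band

def percent_band_difference(pre, post, **kwargs):
    """Relative change in per-band energy between recordings taken before and
    after reconstruction (a hypothetical comparison measure, in percent)."""
    e_pre = band_energies(pre, **kwargs)
    e_post = band_energies(post, **kwargs)
    return 100.0 * np.abs(e_post - e_pre) / e_pre
```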


2021
Author(s): Xiaowei Zhao, Shangxu Wang, Sanyi Yuan, Liang Cheng, Youjun Cai

Author(s): B Birch, CA Griffiths, A Morgan

Collaborative robots are becoming increasingly important for advanced manufacturing processes. The purpose of this paper is to determine the capability of a novel human-robot interface for machine hole drilling. Using a developed voice-activation system, the effects of environmental factors on speech-recognition accuracy are considered. The research investigates the accuracy of a Mel-frequency cepstral coefficient (MFCC)-based feature-extraction algorithm that uses Dynamic Time Warping to compare an utterance to a limited, user-dependent dictionary. The developed speech-recognition method allows for human-robot interaction through a novel integration between the voice recognition and the robot. The system can be utilised in many manufacturing environments where robot motions can be coupled to voice inputs rather than time-consuming physical interfaces. However, there are limitations to uptake in industries where the volume of background machine noise is high.
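A hedged sketch of the MFCC-plus-DTW matching scheme described above, using librosa as a stand-in toolchain (the paper does not name its implementation); the file names and command dictionary are hypothetical:

```python
import librosa

def mfcc_features(path, n_mfcc=13):
    """Load an utterance and return its MFCC matrix of shape (n_mfcc, frames)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def recognize(utterance_path, dictionary):
    """Match an utterance to a user-dependent dictionary by lowest DTW alignment cost.
    `dictionary` maps a command word to the path of a recorded template utterance."""
    query = mfcc_features(utterance_path)
    costs = {}
    for command, template_path in dictionary.items():
        template = mfcc_features(template_path)
        D, _ = librosa.sequence.dtw(X=query, Y=template, metric="euclidean")
        costs[command] = D[-1, -1] / D.shape[0]  # divide by query frames as a crude length normalization
    return min(costs, key=costs.get)

# Hypothetical user-dependent dictionary of robot commands:
# recognize("utterance.wav", {"start": "start.wav", "stop": "stop.wav", "drill": "drill.wav"})
```

In a deployment of this kind, the recognized command word would then be mapped to a pre-programmed robot motion, which is the coupling of voice inputs to robot motions the abstract describes.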

