gesture design
Recently Published Documents

TOTAL DOCUMENTS: 32 (five years: 12)
H-INDEX: 4 (five years: 1)

Author(s): Roberto Bufano, Gennaro Costagliola, Mattia De Rosa, Vittorio Fuccella

2021
Author(s): Amanda Royka, Marieke Schouwstra, Simon Kirby, Julian Jara-Ettinger

For a gesture to be successful, observers must recognize its communicative purpose. Are communicators sensitive to this problem and do they try to ease their observer’s inferential burden? We propose that people shape their gestures to help observers easily infer that their movements are meant to communicate. Using computational models of recursive goal inference, we show that this hypothesis predicts that gestures ought to reveal that the movement is inconsistent with the space of non-communicative goals in the environment. In two gesture-design experiments, we find that people spontaneously shape communicative movements in response to the distribution of potential instrumental goals, ensuring that the movement can be easily differentiated from instrumental action. Our results show that people are sensitive to the inferential demands that observers face. As a result, people actively work to help ensure that the goal of their communicative movement is understood.
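The recursive goal-inference idea above can be sketched as a toy Bayesian model: an observer weighs whether a movement endpoint is better explained as a reach toward some instrumental goal or as a communicative act, and a communicator can exploit this by moving away from all plausible goals. Everything here (the 1-D setup, Gaussian likelihoods, the flat communicative likelihood, all numeric values) is an illustrative assumption, not the authors' actual model.

```python
import math

def goal_likelihood(endpoint, goal, noise=0.5):
    """Gaussian likelihood that a reach toward `goal` ends at `endpoint`."""
    return math.exp(-((endpoint - goal) ** 2) / (2 * noise ** 2))

def p_communicative(endpoint, instrumental_goals, prior_comm=0.5):
    """Posterior probability that a movement is communicative rather than
    a reach toward one of the instrumental goals (uniform prior over goals)."""
    p_instr = (1 - prior_comm) * sum(
        goal_likelihood(endpoint, g) for g in instrumental_goals
    ) / len(instrumental_goals)
    # A communicative movement is unconstrained, modeled here as a flat,
    # low-density likelihood over endpoints.
    p_comm = prior_comm * 0.1
    return p_comm / (p_comm + p_instr)

goals = [0.0, 1.0]                   # objects at positions 0 and 1
near = p_communicative(0.1, goals)   # endpoint close to an object
far = p_communicative(3.0, goals)    # endpoint far from all objects
# An endpoint inconsistent with every instrumental goal makes the
# communicative interpretation much more probable -- the paper's prediction.
```

In this toy version, `far` comes out close to 1 while `near` stays low, mirroring the finding that gestures shaped to avoid the space of instrumental goals are easier to recognize as communicative.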


Sensors, 2021, Vol 21 (11), pp. 3735
Author(s): Lesong Jia, Xiaozhou Zhou, Hao Qin, Ruidong Bai, Liuqing Wang, ...

Continuous hand movements contain discrete expressions of meaning, forming a variety of semantic gestures. For example, finger flexion is generally considered to have three semantic states: bent, half bent, and straight. However, there has been no research on the number of semantic states that each movement primitive of the hand can convey, or on the interval and representative movement angle of each state. To clarify these issues, we conducted perception and expression experiments. Experiments 1 and 2 examined the perceivable semantic levels and boundaries of different motion primitive units from the perspective of visual semantic perception. Experiment 3 verified and optimized the segmentation results obtained above and further determined the typical motion value of each semantic state. Experiment 4 illustrated an empirical application of this semantic state segmentation, using Leap Motion as an example. The result is a discrete gesture semantic expression space, for both the real world and the Leap Motion digital world, defining the number of semantic states of each hand motion primitive unit together with the boundaries and typical motion angle values of each state. This quantitative semantic expression can guide and advance research in gesture coding, gesture recognition, and gesture design.
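The kind of segmentation this abstract describes can be sketched as a lookup from a continuous flexion angle to a discrete semantic state with a typical (representative) angle. The boundary and typical angles below are illustrative placeholders, not the paper's measured values.

```python
# Hypothetical segmentation of a finger-flexion angle (degrees) into the
# three semantic states mentioned in the abstract. All numbers are
# illustrative assumptions, not the paper's results.
STATES = [
    # (name, lower bound, upper bound, typical angle)
    ("straight",  0.0,  30.0,  10.0),
    ("half_bent", 30.0, 75.0,  50.0),
    ("bent",      75.0, 180.1, 100.0),
]

def semantic_state(angle_deg):
    """Return (state name, typical angle) for a measured flexion angle."""
    for name, lo, hi, typical in STATES:
        if lo <= angle_deg < hi:
            return name, typical
    raise ValueError(f"angle out of range: {angle_deg}")

print(semantic_state(12.0))   # -> ('straight', 10.0)
print(semantic_state(60.0))   # -> ('half_bent', 50.0)
```

A sensor-facing system (e.g. one reading joint angles from Leap Motion) would call such a function per motion primitive unit, reporting the state name for recognition and the typical angle for rendering or coding.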


2020, Vol 143, pp. 102502
Author(s): Huiyue Wu, Jinxuan Gai, Yu Wang, Jiayi Liu, Jiali Qiu, ...

Author(s): Huiyue Wu, Shengqian Fu, Liuqingqing Yang, Xiaolong (Luke) Zhang

Author(s): Haodong Chen, Wenjin Tao, Ming C. Leu, Zhaozheng Yin

Abstract: Human-robot collaboration (HRC) is a challenging task in modern industry, and gesture communication in HRC has attracted much interest. This paper proposes and demonstrates a dynamic gesture recognition system based on Motion History Images (MHI) and Convolutional Neural Networks (CNN). First, ten dynamic gestures are designed for a human worker to communicate with an industrial robot. Second, the MHI method is used to extract gesture features from video clips and generate static images of dynamic gestures as inputs to the CNN. Finally, a CNN model is constructed for gesture recognition. The experimental results show very promising classification accuracy with this method.
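The MHI feature-extraction step can be sketched in a few lines: pixels that just moved are set to a maximum timestamp value tau, and all other pixels decay toward zero, so a single image encodes both where and how recently motion occurred. This is a minimal pure-Python sketch on toy 3x3 frames, not the paper's implementation (which would operate on real video, e.g. via OpenCV's motion templates).

```python
# Minimal Motion History Image (MHI) update, illustrating the feature
# extraction the paper performs before the CNN stage. Frames are tiny
# grayscale images as 2-D lists; values and sizes are toy data.

def update_mhi(mhi, prev_frame, frame, tau=5, threshold=10):
    """Update an MHI in place: pixels whose intensity changed by more than
    `threshold` are set to `tau`; all other pixels decay by 1 (floored at 0)."""
    for y in range(len(frame)):
        for x in range(len(frame[0])):
            moved = abs(frame[y][x] - prev_frame[y][x]) > threshold
            mhi[y][x] = tau if moved else max(mhi[y][x] - 1, 0)
    return mhi

# Toy sequence: a bright blob moves from the left column to the right.
frames = [
    [[200, 0, 0], [200, 0, 0], [0, 0, 0]],
    [[0, 200, 0], [0, 200, 0], [0, 0, 0]],
    [[0, 0, 200], [0, 0, 200], [0, 0, 0]],
]
mhi = [[0] * 3 for _ in range(3)]
for prev, cur in zip(frames, frames[1:]):
    update_mhi(mhi, prev, cur)
# The most recent motion has the highest values and older motion has decayed,
# so the static MHI encodes the gesture's trajectory for the CNN.
print(mhi)   # -> [[4, 5, 5], [4, 5, 5], [0, 0, 0]]
```

The resulting single-channel image is what gets fed to the CNN classifier, collapsing a whole video clip into one static input.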

