A Novel 3D Editor for Gesture Design Based on Labanotation

Author(s): Kathleen Anderson, Börge Kordts, Andreas Schrader
2020, Vol 143, pp. 102502
Author(s): Huiyue Wu, Jinxuan Gai, Yu Wang, Jiayi Liu, Jiali Qiu, ...

2001
Author(s): A. Chris Long, James A. Landay, Lawrence A. Rowe

Author(s): Beatriz López Mencía, David D. Pardo, Alvaro Hernández Trapote, Luis A. Hernández Gómez

One of the major challenges for dialogue systems deployed in commercial applications is to improve robustness when common low-level problems related to speech recognition occur. We first discuss this important family of interaction problems, and then the features of non-verbal, visual communication that Embodied Conversational Agents (ECAs) bring ‘into the picture’, which may be tapped to improve spoken dialogue robustness and the general smoothness and efficiency of the interaction between the human and the machine. Our approach is centred around the information provided by ECAs. We deal with all stages of the conversation system development process, from scenario description, to gesture design, to evaluation with comparative user tests. We conclude that ECAs can help improve the robustness of, as well as the users’ subjective experience with, a dialogue system. However, they may also make users more demanding and intensify privacy and security concerns.


Author(s): Nicholas Jackiw, Nathalie Sinclair

Author(s): Roberto Bufano, Gennaro Costagliola, Mattia De Rosa, Vittorio Fuccella

Sensors, 2021, Vol 21 (11), pp. 3735
Author(s): Lesong Jia, Xiaozhou Zhou, Hao Qin, Ruidong Bai, Liuqing Wang, ...

Continuous movements of the hand contain discrete expressions of meaning, forming a variety of semantic gestures. For example, the bending of a finger is generally considered to include three semantic states: bending, half bending, and straightening. However, there has been no research on the number of semantic states that each movement primitive of the hand can convey, in particular the interval of each semantic state and its representative movement angle. To clarify these issues, we conducted perception and expression experiments. Experiments 1 and 2 focused on the perceivable semantic levels and boundaries of different motion primitive units from the perspective of visual semantic perception. Experiment 3 verified and optimized the segmentation results obtained above and further determined the typical motion values of each semantic state. Experiment 4 then illustrated the empirical application of this semantic state segmentation, using Leap Motion as an example. The result is a discrete gesture semantic expression space, in both the real world and the Leap Motion digital world, containing a clearly defined number of semantic states for each hand motion primitive unit, together with the boundaries and typical motion angle values of each state. This quantitative semantic expression space can guide and advance research on gesture coding, gesture recognition, and gesture design.
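To illustrate the kind of segmentation this abstract describes, here is a minimal Python sketch that maps a continuous finger flexion angle to one of the three semantic states. The state names follow the abstract; the boundary angles and the function name `classify_bend` are hypothetical placeholders, not the empirically determined values or code reported in the paper.

```python
# Hypothetical sketch of semantic state segmentation for finger bending.
# The three states come from the abstract; the boundary angles below are
# illustrative placeholders, NOT the paper's measured values.

# (upper boundary in degrees of flexion, semantic state) pairs,
# checked in ascending order of the boundary.
HYPOTHETICAL_BOUNDARIES = [
    (30.0, "straightening"),   # 0-30 degrees  -> straight finger
    (100.0, "half bending"),   # 30-100 degrees -> half bent
    (180.0, "bending"),        # 100-180 degrees -> fully bent
]


def classify_bend(angle_deg: float) -> str:
    """Map a continuous flexion angle to a discrete semantic state."""
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("flexion angle must be in [0, 180] degrees")
    for upper, state in HYPOTHETICAL_BOUNDARIES:
        if angle_deg <= upper:
            return state
    return HYPOTHETICAL_BOUNDARIES[-1][1]


print(classify_bend(10.0))   # -> straightening
print(classify_bend(60.0))   # -> half bending
print(classify_bend(150.0))  # -> bending
```

In practice, the thresholds would be replaced by the state boundaries the experiments above determine, and the input angle would come from a tracking device such as Leap Motion.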

