Gesture Input
Recently Published Documents

TOTAL DOCUMENTS: 70 (five years: 18)
H-INDEX: 9 (five years: 2)

2021 ◽ Author(s): ◽ Yi-jing Chung

Geometric transformation gestures such as rotation, scaling, and dragging are extremely common, and there are many ways to design and implement them. Variants include slightly modifying the gesture input (e.g. a different initial placement or tracing of the fingers) or the resulting action (e.g. the scale factor, retention of the scale centre, or the rotation degree). Little research has assessed the best design of geometric transformation gestures across multiple multi-touch devices. We describe our research project, which examines variants of the standard geometric transformation hand gestures. We hypothesise that these variants are superior to the standard gestures (supporting more precise transformations and faster completion times) while being as easy to initiate and maintain. We also discuss our experiences implementing these variants and present the user experiments we completed to test our hypotheses. The results show that only some of our variants are more precise and support faster transformation completion, and that only some of these results are mirrored between devices. Furthermore, only some of our variants are as easy to initiate and maintain as the standard gestures.
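
For context, the standard two-finger transform that such variants build on reduces to simple vector arithmetic over the touch points. The sketch below is a generic illustration, not the paper's variants; the function name and (x, y) point format are assumptions. It derives the scale factor, rotation angle, and drag offset from the old and new positions of two fingers:

    import math

    def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
        # Each argument is an (x, y) touch point; p*_old are the positions
        # when the gesture started, p*_new are the current positions.
        old_dx, old_dy = p2_old[0] - p1_old[0], p2_old[1] - p1_old[1]
        new_dx, new_dy = p2_new[0] - p1_new[0], p2_new[1] - p1_new[1]

        # Scale: ratio of the inter-finger distances.
        scale = math.hypot(new_dx, new_dy) / math.hypot(old_dx, old_dy)

        # Rotation: change in the angle of the inter-finger vector (radians).
        rotation = math.atan2(new_dy, new_dx) - math.atan2(old_dy, old_dx)

        # Drag: displacement of the midpoint between the two fingers.
        drag = ((p1_new[0] + p2_new[0] - p1_old[0] - p2_old[0]) / 2,
                (p1_new[1] + p2_new[1] - p1_old[1] - p2_old[1]) / 2)
        return scale, rotation, drag

    # Example: the fingers spread apart and rotate slightly.
    print(two_finger_transform((0, 0), (100, 0), (-10, 0), (110, 10)))

The variants the paper studies would change details of this mapping, for example which point is retained as the scale centre or how the scale factor is computed.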



2021 ◽ Author(s): Laura-Bianca Bilius ◽ Radu-Daniel Vatavu
Keyword(s): Web Tool

2021 ◽ Vol 2 ◽ Author(s): Sungchul Jung ◽ Robert W. Lindeman

The concepts of “immersion” and “presence” have served as staple metrics for evaluating the quality of virtual reality experiences for more than five decades, even as the concepts themselves have evolved in both technical and psychological terms. To enhance the user’s experience, studies have investigated the impact of different visual, auditory, and haptic stimuli in various contexts, mainly to explore the concepts of “plausibility illusion” and “place illusion”. Previous research has sometimes, but not always, shown a positive correlation between increased realism and increased presence; thus, very little of the work on presence reports an unequivocal correlation. Indeed, one might classify the overall findings of the field as “messy”. Better (or more) visual, auditory, or haptic cues, or increased agency, may lead to increased realism, but not necessarily increased presence, and the effect may well depend on the application context. Rich visual and audio cues in concert contribute significantly to both realism and presence, but adding tactile cues, gesture input support, or a combination of these might improve realism without improving presence. In this paper, we review previous research and suggest a possible theory to better define the relationship between increases in sensory-based realism and presence, and thus help VR researchers create more effective experiences.


Author(s): Ebru Pınar ◽ Sumeyra Ozturk ◽ F. Nihan Ketrez ◽ Şeyda Özçalışkan
Keyword(s):

2020 ◽ Vol 10 (21) ◽ pp. 7898 ◽ Author(s): Akm Ashiquzzaman ◽ Hyunmin Lee ◽ Kwangki Kim ◽ Hye-Young Kim ◽ Jaehyung Park ◽ ...

Current deep convolutional neural network (DCNN)-based hand gesture detectors achieve acute precision but demand very high-performance computing power. Although DCNN-based detectors are capable of accurate classification, the sheer computing power needed makes them very difficult to run on low-power hardware in remote environments. Moreover, classical DCNN architectures require a fixed number of input dimensions, which forces preprocessing and makes them impractical for real-world applications. In this research, a practical DCNN with an optimized architecture is proposed: filter/node pruning reduces the model’s size, and spatial pyramid pooling (SPP) makes the model input dimension-invariant. This compact SPP-DCNN module uses 65% fewer parameters than traditional classifiers and operates almost 3× faster than classical models. Moreover, the proposed algorithm, which decodes gestures or sign-language finger-spelling from videos, achieved the highest benchmark accuracy with the fastest processing speed. This method paves the way for practical hand gesture input-based human-computer interaction (HCI) applications.
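
To illustrate how SPP removes the fixed-input constraint: it pools the final convolutional feature map into a fixed set of bins at several pyramid levels, so the flattened output length depends only on the channel count and the levels, never on the input resolution. Below is a minimal PyTorch sketch of the generic SPP idea (the class name and pyramid levels are assumptions; the paper’s pruned architecture may differ):

    import torch
    import torch.nn as nn

    class SpatialPyramidPooling(nn.Module):
        # Pools a feature map at several pyramid levels and concatenates
        # the results, yielding a fixed-length vector for any input H x W.
        def __init__(self, levels=(1, 2, 4)):
            super().__init__()
            # One adaptive pooler per level; each outputs level x level bins.
            self.pools = nn.ModuleList([nn.AdaptiveMaxPool2d(n) for n in levels])

        def forward(self, x):  # x: (batch, channels, H, W)
            # Flatten each pooled map to (batch, channels * level^2), then concat.
            return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

    # Two different input resolutions map to the same 8 * (1 + 4 + 16) = 168 features.
    spp = SpatialPyramidPooling()
    for h, w in [(32, 32), (48, 64)]:
        print(spp(torch.randn(1, 8, h, w)).shape)  # torch.Size([1, 168]) both times

Because the output length is fixed, the fully connected classifier after this layer can accept video frames at their native resolution, avoiding the resize/crop preprocessing the abstract mentions.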


2019 ◽ Vol 9 (2) ◽ Author(s): Mohamad Yahya Fekri Aladin ◽ Ajune Wanis Ismail

Mixed reality (MR) is the next evolution in human-computer interaction, combining the physical and digital environments so that they coexist [1]. Interaction remains an active research area in MR, and this paper focuses on interaction rather than other research areas such as tracking, calibration, and display [2], because current interaction techniques are still not intuitive enough. This paper explores user interaction using gesture and speech inputs for 3D object manipulation in an MR environment. It explains the design stage, which combines gesture and speech inputs to enhance the user experience in an MR workspace. After acquiring gesture input and speech commands, an MR prototype is proposed to integrate the two modalities. The paper concludes with results and discussion.
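
A common pattern for this kind of multimodal input, sketched below, is to let speech select the operation while the gesture supplies the spatial parameters. The names, commands, and data shapes here are hypothetical illustrations, not the paper’s prototype API:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Object3D:
        name: str
        position: Tuple[float, float, float]
        scale: float = 1.0

    def apply_multimodal_command(obj, speech, gesture_delta):
        # Speech selects the operation; the gesture supplies the spatial values.
        if speech == "move":
            obj.position = tuple(p + d for p, d in zip(obj.position, gesture_delta))
        elif speech == "scale":
            # Interpret the gesture's vertical travel as a scale delta.
            obj.scale = max(0.1, obj.scale + gesture_delta[1])
        return obj

    cube = Object3D("cube", (0.0, 0.0, 0.0))
    apply_multimodal_command(cube, "move", (0.5, 0.0, -0.2))
    print(cube)  # Object3D(name='cube', position=(0.5, 0.0, -0.2), scale=1.0)

Splitting the modalities this way plays to each one’s strength: speech is good at discrete command selection, while hand motion is good at continuous spatial control.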

