User Needs of Three Dimensional Hand Gesture Interfaces in Residential Environment Based on Diary Method

2015 ◽  
Vol 41 (5) ◽  
pp. 461-469
Author(s):  
Dong Yeong Jeong ◽  
Heejin Kim ◽  
Sung H. Han ◽  
Donghun Lee


2021 ◽  
Author(s):  
Yu Wai Chau

To investigate gestural behavior during human–computer interaction, an examination of the designs of current gesture interaction methods is conducted. This information is then compared against currently emerging gesture databases to determine whether their gesture designs follow the guidelines identified in that examination. The comparison also looks for common trends across the developed gesture databases, such as similar gestures being assigned to specific commands. To observe gestural behavior during interaction with computer interfaces, an experiment was devised to record gestures in use, for inclusion in gesture databases, by means of a hardware sensor device. It was found that properties such as opposing adjacent fingers and gestures that simulate object manipulation influence user comfort. The results of this study will inform guidelines for designing new gestures for hand gesture interfaces.


Author(s):  
Marilyn C. Salzman ◽  
Chris Dede ◽  
R. Bowen Loftin ◽  
Debra Sprague

Understanding how to leverage the features of immersive, three-dimensional (3-D) multisensory virtual reality to meet user needs presents a challenge for human factors researchers. This paper describes our approach to evaluating this medium's potential as a tool for teaching abstract science. It describes some of our early research outcomes and discusses an evaluation comparing a 3-D VR microworld to an alternative 2-D computer-based microworld. Both are simulations in which students learn about electrostatics. The outcomes of the comparison study suggest: 1) the immersive 3-D VR microworld facilitated conceptual and three-dimensional learning that the 2-D computer microworld did not, and 2) VR's multisensory information aided students who found the electrostatics concepts challenging. As a whole, our research suggests that VR's immersive representational abilities hold promise for teaching and for visualization. It also demonstrates that characteristics of the learning experience such as usability, motivation, and simulator sickness are an important part of assessing this medium's potential.


2020 ◽  
Vol 2 (2) ◽  
pp. 153-161
Author(s):  
Egemen Ertugrul ◽  
Ping Li ◽  
Bin Sheng

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is this technology's numerous applications in various domains (e.g., robotics, games, medicine, automotive). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, time-of-flight) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD), CamCube), interest in this domain has recently surged. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we proposed a deep neural network solution employing the PointNet architecture for the problem of hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset, then proposed a multistage hand segmentation pipeline comprising filtering, clustering, locating the hand within a volume of interest, and hand–forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both derived from the same stream. Beyond the general advantages of 3D technology, the 3D method using PointNet is shown to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
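The core PointNet idea this abstract relies on (a shared per-point network followed by a symmetric pooling function, so the classifier is invariant to the ordering of points in the cloud) can be illustrated with a minimal NumPy sketch. All weights, layer sizes, and the four-class output here are toy assumptions for illustration, not the paper's actual network:

```python
import numpy as np

def pointnet_classify(points, w1, b1, w2, b2):
    """Minimal PointNet-style classifier sketch.

    points: (N, 3) array of 3D hand points (e.g., from a ToF depth sensor).
    A shared per-point MLP lifts each point to a feature vector; a symmetric
    max-pool then aggregates them into one order-invariant global feature,
    which a final linear layer maps to gesture class scores.
    """
    per_point = np.maximum(points @ w1 + b1, 0.0)   # (N, F) shared ReLU MLP
    global_feat = per_point.max(axis=0)             # (F,) symmetric aggregation
    return global_feat @ w2 + b2                    # (C,) gesture scores

# Toy weights: 3 input coords -> 8 features -> 4 hypothetical gesture classes
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

cloud = rng.normal(size=(128, 3))                   # a fake hand point cloud
scores = pointnet_classify(cloud, w1, b1, w2, b2)
shuffled = pointnet_classify(cloud[rng.permutation(128)], w1, b1, w2, b2)
assert np.allclose(scores, shuffled)                # point order is irrelevant
```

The max-pool is what makes the network a function of the *set* of points rather than their storage order, which is why point clouds from the segmentation stage need no canonical ordering before classification.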


Author(s):  
Chen Xiang ◽  
V. Lantz ◽  
Wang Kong-Qiao ◽  
Zhao Zhang-Yan ◽  
Zhang Xu ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 564 ◽  
Author(s):  
Shahzad Ahmed ◽  
Sung Ho Cho

The growing integration of technology into daily life has increased the need for more convenient methods of human–computer interaction (HCI). Given that existing HCI approaches exhibit various limitations, hand gesture recognition-based HCI may serve as a more natural mode of man–machine interaction in many situations. Inspired by an inception module-based deep-learning network (GoogLeNet), this paper presents a novel hand gesture recognition technique for impulse-radio ultra-wideband (IR-UWB) radars that achieves higher gesture recognition accuracy. First, a methodology for representing radar signals as three-dimensional image patterns is presented; then, an inception module-based variant of GoogLeNet is used to analyze the patterns within the images to recognize different hand gestures. The proposed framework is evaluated on eight different hand gestures, achieving a promising classification accuracy of 95%. To verify the robustness of the proposed algorithm, multiple human subjects were involved in data acquisition.
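The inception-module idea borrowed from GoogLeNet in the abstract above is, at its core, a set of parallel branches with different receptive fields whose outputs are concatenated along the channel axis. A minimal NumPy sketch of that structure (simplified to 1x1 convolutions and an average-pool branch; all shapes and weights are toy assumptions, not the paper's actual network):

```python
import numpy as np

def conv1x1(x, w):
    # x: (H, W, C_in), w: (C_in, C_out); a 1x1 conv is a per-pixel matmul
    return np.einsum('hwc,cd->hwd', x, w)

def inception_block(x, w_a, w_b, w_c):
    """Sketch of an inception-style block: parallel branches concatenated
    along the channel axis, so the layer mixes several receptive fields.
    Here simplified to two 1x1-conv branches plus a pool-then-project branch."""
    branch_a = np.maximum(conv1x1(x, w_a), 0.0)      # 1x1 conv branch
    branch_b = np.maximum(conv1x1(x, w_b), 0.0)      # second 1x1 conv branch
    # 3x3 average pool (stride 1, 'same' padding) followed by a 1x1 projection
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode='edge')
    pooled = np.mean([padded[i:i + x.shape[0], j:j + x.shape[1]]
                      for i in range(3) for j in range(3)], axis=0)
    branch_c = np.maximum(conv1x1(pooled, w_c), 0.0)
    return np.concatenate([branch_a, branch_b, branch_c], axis=-1)

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 16, 4))                     # a toy radar "image" pattern
out = inception_block(x,
                      rng.normal(size=(4, 8)),
                      rng.normal(size=(4, 8)),
                      rng.normal(size=(4, 8)))
assert out.shape == (16, 16, 24)                     # channels concatenated: 8+8+8
```

The channel concatenation is the design choice that lets the network learn, per layer, which branch scale best captures the gesture-induced patterns in the radar images.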

