Method for User Interface of Large Displays Using Arm Pointing and Finger Counting Gesture Recognition

2014, Vol 2014, pp. 1-9
Author(s): Hansol Kim, Yoonkyung Kim, Eui Chul Lee

Although many three-dimensional pointing gesture recognition methods have been proposed, the problem of self-occlusion has not been considered. Furthermore, because almost all pointing gesture recognition methods use a wide-angle camera, additional sensors or cameras are required to perform finger gesture recognition concurrently. In this paper, we propose a method that performs both pointing gesture and finger gesture recognition for large display environments using a single Kinect device and a skeleton tracking model. To handle self-occlusion, a compensation technique is applied to the user's detected shoulder position when a hand occludes the shoulder. In addition, we propose a technique to facilitate finger counting gesture recognition, based on the depth image of the hand region extracted at the end of the pointing vector. Experimental results indicate that, with exception handling for self-occlusion, the pointing accuracy at a specific reference position improved significantly: the average root mean square error was approximately 13 pixels at a screen resolution of 1920 × 1080 pixels. Moreover, the finger counting gesture recognition accuracy was 98.3%.
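The two geometric steps described here, substituting a reliable shoulder estimate under self-occlusion and casting a shoulder-to-hand ray onto the display plane, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the occlusion test, the threshold, and all argument names are assumptions.

```python
import numpy as np

def pointing_target(shoulder, hand, last_good_shoulder,
                    screen_origin, screen_normal, occlusion_thresh=0.15):
    """Estimate where a shoulder-to-hand ray meets the display plane.

    All arguments are 3-vectors in camera (metre) coordinates. When the hand
    comes close enough to the tracked shoulder to suggest self-occlusion,
    the last reliable shoulder estimate is substituted (a stand-in for the
    paper's shoulder-position compensation step).
    """
    shoulder = np.asarray(shoulder, float)
    hand = np.asarray(hand, float)
    screen_origin = np.asarray(screen_origin, float)
    screen_normal = np.asarray(screen_normal, float)

    # Crude self-occlusion test: the hand sits almost on the shoulder joint.
    if np.linalg.norm(hand - shoulder) < occlusion_thresh:
        shoulder = np.asarray(last_good_shoulder, float)

    direction = hand - shoulder
    denom = direction @ screen_normal
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the screen plane
    # Solve (shoulder + t * direction - screen_origin) . screen_normal = 0.
    t = ((screen_origin - shoulder) @ screen_normal) / denom
    return shoulder + t * direction if t > 0 else None
```

The returned 3D point would still need to be mapped into the 1920 × 1080 pixel grid of the display; that mapping depends on the screen calibration, which the abstract does not describe.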

2014, Vol 30 (1), pp. 1-11
Author(s): Alison C. McDonald, Elora C. Brenneman, Alan C. Cudlip, Clark R. Dickerson

As the modern workplace is dominated by submaximal repetitive tasks, knowledge of the effect of task location is important to ensure that workers are not exposed to potentially injurious demands imposed by repetitive work in awkward or sustained postures. The purpose of this investigation was to develop a three-dimensional spatial map of the muscle activity of the right upper extremity during laterally directed submaximal force exertions. Electromyographic (EMG) activity was recorded from fourteen muscles surrounding the shoulder complex as participants exerted 40 N of force in two directions (leftward, rightward) at 70 defined locations. Hand position strongly influenced total and certain individual muscle demands in both push directions, as identified by repeated-measures analysis of variance (P < .001). Individual muscle activation varied with hand location from 1 to 21% MVE during rightward exertions and from 1 to 27% MVE during leftward exertions. Continuous prediction equations for muscular demands based on three-dimensional spatial parameters were created, with explained variance ranging from 25 to 73%. The study provides novel information for evaluating existing workplace designs and informing proactive design, and may help identify preferred geometric placements of lateral exertions in occupational settings that lower muscular demands, potentially mitigating fatigue and associated musculoskeletal risks.
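A continuous prediction equation of this kind can be illustrated with ordinary least squares. The sketch below is hypothetical: the abstract does not state the regressors used, so a quadratic in the hand's coordinates stands in, and the returned R² plays the role of the explained-variance figures quoted above.

```python
import numpy as np

def fit_demand_model(hand_xyz, activation):
    """Least-squares fit of one muscle's demand against hand position.

    hand_xyz: (N, 3) array of hand locations; activation: (N,) array of
    %MVE values for a single muscle. The quadratic regressors are an
    assumption, not the study's actual model form.
    """
    x, y, z = hand_xyz.T
    X = np.column_stack([np.ones(len(hand_xyz)), x, y, z, x * x, y * y, z * z])
    coef, *_ = np.linalg.lstsq(X, activation, rcond=None)
    predicted = X @ coef
    ss_res = np.sum((activation - predicted) ** 2)
    ss_tot = np.sum((activation - activation.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot  # explained variance of the fit
    return coef, r_squared
```

Fitting one such equation per muscle and per push direction would yield the kind of spatial map the study describes, with R² in the reported 25 to 73% range indicating how much of the activation variance hand location explains.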


Author(s): Ankit Chaudhary, Jagdish L. Raheja, Karen Das, Shekhar Raheja

In the last few years, gesture recognition and gesture-based human-computer interaction have gained significant popularity among researchers worldwide. They have a number of applications ranging from security to entertainment. Gesture recognition is a form of biometric identification that relies on data acquired from the gesture depicted by an individual. This data, which can be either two-dimensional or three-dimensional, is compared against a database of individuals or against respective thresholds, depending on how the problem is framed. In this paper, a novel method for calculating the angles of bent fingers on both hands is discussed, and its application to the control of a robotic hand is presented. This is the first such study in the area of natural computing, calculating angles without using any wired equipment, colors, markers, or other devices: the system deploys only a simple camera to capture images. Pre-processing and segmentation of the region of interest are performed in the HSV color space and in binary format, respectively. The technique presented in this paper requires no training on the user's part.
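A rough sketch of the pipeline described, HSV segmentation of the region of interest into a binary mask followed by an angle computed from detected hand landmarks, might look like the following. The skin-color thresholds and the three-point angle formula are illustrative assumptions, not the authors' exact method.

```python
import cv2
import numpy as np

# Illustrative skin range in HSV; real thresholds must be tuned per setup.
LOWER_SKIN = np.array([0, 40, 60], np.uint8)
UPPER_SKIN = np.array([25, 180, 255], np.uint8)

def segment_hand(frame_bgr):
    """Segment the hand region in HSV and return a cleaned binary mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill holes

def finger_bend_angle(tip, knuckle, base):
    """Bend angle (degrees) at the knuckle between fingertip and finger base."""
    v1 = np.asarray(tip, float) - np.asarray(knuckle, float)
    v2 = np.asarray(base, float) - np.asarray(knuckle, float)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

The fingertip and knuckle points would come from contour analysis of the binary mask; how the paper localizes them is not specified in this abstract.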


2014, Vol 41 (6), pp. 0609003
Author(s): 尹章芹 (Yin Zhangqin), 顾国华 (Gu Guohua), 陈钱 (Chen Qian), 钱惟贤 (Qian Weixian)

2007, Vol 98 (6), pp. 3614-3626
Author(s): Claude Ghez, Robert Scheidt, Hank Heijink

We previously reported that the kinematics of reaching movements reflect the superimposition of two separate control mechanisms specifying the hand's spatial trajectory and its final equilibrium position. We now asked whether the brain maintains separate representations of the spatial goals for planning hand trajectory and final position. One group of subjects learned a 30° visuomotor rotation about the hand's starting point while performing a movement reversal task ("slicing") in which they reversed direction at one target and terminated movement at another. This task required accuracy in acquiring a target mid-movement. A second group adapted while moving to, and stabilizing at, a single target ("reaching"). This task required accuracy in specifying an intended final position. We examined how learning in the two tasks generalized both to movements made from untrained initial positions and to movements directed toward untrained targets. Shifting initial hand position had differential effects on the location of reversals and final positions: trajectory directions remained unchanged and reversal locations were displaced in slicing, whereas final positions of both reaches and slices were relatively unchanged. Generalization across directions in slicing was consistent with a hand-centered representation of the desired reversal point, as demonstrated previously for this task, whereas the distributions of final positions were consistent with an eye-centered representation, as found previously in studies of pointing in three-dimensional space. Our findings indicate that the intended trajectory and final position are represented in different coordinate frames, reconciling previous conflicting claims of hand-centered (vectorial) and eye-centered representations in reach planning.
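For concreteness, a 30° visuomotor rotation about the hand's starting point simply rotates the visual feedback of hand position in the plane. A minimal sketch, in our notation rather than the authors' code:

```python
import numpy as np

def rotated_feedback(hand_xy, start_xy, angle_deg=30.0):
    """Cursor position under a visuomotor rotation about the start point."""
    hand = np.asarray(hand_xy, float)
    start = np.asarray(start_xy, float)
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return start + rot @ (hand - start)
```

Subjects adapt by aiming opposite the imposed rotation; the question the study addresses is in which coordinate frame that learned remapping generalizes to new start positions and targets.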


2013, Vol 760-762, pp. 1556-1561
Author(s): Ting Wei Du, Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Taking into account the layout characteristics of indoor scenes and the many planar features they contain, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image data are transformed into a point cloud, i.e., a set of discrete three-dimensional points, which is denoised and down-sampled; second, the normal of every point in the cloud is calculated and the normals are clustered using a Gaussian Mixture Model; finally, the point cloud is segmented with the RANSAC algorithm. Experimental results show that the resulting regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
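The clustering-plus-RANSAC stage can be sketched as below. Normal estimation is assumed to have been done beforehand (e.g., by PCA over local neighborhoods); the component count, thresholds, and one-plane-per-cluster fit are simplifying assumptions, not the paper's stated parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_by_normals(points, normals, n_components=6,
                       ransac_iters=200, dist_thresh=0.01):
    """Cluster unit normals with a GMM, then fit one plane per cluster by RANSAC.

    points, normals: (N, 3) arrays; normals are assumed precomputed.
    Returns per-point cluster labels and the fitted (normal, offset) planes.
    """
    labels = GaussianMixture(n_components=n_components).fit_predict(normals)
    planes = []
    rng = np.random.default_rng(0)
    for c in range(n_components):
        pts = points[labels == c]
        if len(pts) < 3:
            continue
        best_inliers, best_plane = 0, None
        for _ in range(ransac_iters):
            p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-12:
                continue  # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            d = -n @ p0
            inliers = np.sum(np.abs(pts @ n + d) < dist_thresh)
            if inliers > best_inliers:
                best_inliers, best_plane = inliers, (n, d)
        if best_plane is not None:
            planes.append((c, best_plane, best_inliers))
    return labels, planes
```

Because GMM clustering groups points by normal direction alone, parallel surfaces (e.g., a tabletop and the floor) land in one cluster; the RANSAC step, possibly run repeatedly on the remaining outliers, is what separates them into distinct planes.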


2020, Vol 17 (4), pp. 172988142094237
Author(s): Yu He, Shengyong Chen

The emerging time-of-flight (TOF) camera is an attractive device for robot vision systems that capture real-time three-dimensional (3D) images, but the sensor suffers from low image resolution and precision. This article proposes an approach for automatically generating an imaging model in 3D space for error correction. From observation data, an initial coarse model of the depth image is obtained for each TOF camera; its accuracy is then improved by an optimization method. Experiments carried out with three TOF cameras show that accuracy is dramatically improved by the spatial correction model.
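A spatial correction model of this general flavor can be illustrated as a least-squares fit from raw TOF measurements and pixel coordinates to reference depths. The polynomial terms below are assumptions; the article's actual model form and optimization method are not specified in the abstract.

```python
import numpy as np

def _design_matrix(uv, d_raw):
    """Low-order polynomial regressors in pixel position and raw depth."""
    u, v = uv.T
    return np.column_stack([np.ones_like(d_raw), u, v, d_raw,
                            u * v, u * d_raw, v * d_raw, d_raw ** 2])

def fit_depth_correction(uv, d_raw, d_ref):
    """Fit a per-camera correction d_ref ≈ f(u, v, d_raw) by least squares.

    uv: (N, 2) pixel coordinates; d_raw: (N,) raw TOF depths; d_ref: (N,)
    reference depths measured from calibration targets.
    """
    coef, *_ = np.linalg.lstsq(_design_matrix(uv, d_raw), d_ref, rcond=None)
    return coef

def apply_correction(coef, uv, d_raw):
    """Corrected depths for new measurements from the same camera."""
    return _design_matrix(uv, d_raw) @ coef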

