MagIK: A Hand Tracking Magnetic Positioning System Based on a Kinematic Model of the Hand

Author(s):  
Francesco Santoni ◽  
Alessio De Angelis ◽  
Antonio Moschitta ◽  
Paolo Carbone

In this paper, we present a hand tracking system based on magnetic positioning. A single magnetic node is mounted on each fingertip, and two magnetic nodes are mounted on the back of the hand. A fixed array of receiving coils detects the magnetic field, from which the position and orientation of each magnetic node can be inferred. A kinematic model of the whole hand has been developed: starting from the positioning data of each magnetic node, it can be used to calculate the position and flexion angle of each finger joint, as well as the position and orientation of the hand in space. Because it relies on magnetic fields, the hand tracking system also works in non-line-of-sight conditions. The gesture reconstruction is validated by comparison with a commercial hand tracking system based on a depth camera. The system requires only a small amount of electronics to be mounted on the hand, which would allow building a light and comfortable data glove usable for several purposes: human-machine interfaces, sign language recognition, diagnostics, and rehabilitation.
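The kinematic model described above maps node positions to finger joint angles and back. As a minimal sketch of the forward direction only, the planar single-finger chain below shows how flexion angles at each joint determine the fingertip position; the function name and phalanx-length parameters are illustrative assumptions, not the authors' actual model:

```python
import math

def fingertip_position(base, phalanx_lengths, flexion_angles):
    """Planar forward kinematics of a single finger (illustrative sketch).

    base            -- (x, y) position of the knuckle (MCP joint)
    phalanx_lengths -- lengths of the phalanges, knuckle to tip
    flexion_angles  -- flexion at each joint, in radians
    """
    x, y = base
    theta = 0.0
    for length, angle in zip(phalanx_lengths, flexion_angles):
        theta += angle                 # joint angles accumulate along the chain
        x += length * math.cos(theta)  # advance to the next joint
        y += length * math.sin(theta)
    return x, y
```

The inverse problem solved by the paper's model (recovering the joint angles from the measured fingertip pose) would invert this mapping under the anatomical constraints of the hand.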

2021 ◽  


Sign language is the primary method of communication for hearing- and speech-impaired people around the world. Most speech- and hearing-impaired people understand a single sign language, so there is an increasing demand for sign language interpreters. For hearing people, learning a sign language is difficult; for a speech- and hearing-impaired person, learning a spoken language is impossible. A lot of research is being done in the domain of automatic sign language recognition. Different methods, such as computer vision, data gloves, and depth sensors, can be used to train a computer to interpret sign language. Interpretation is performed from sign to text, text to sign, speech to sign, and sign to speech. Different countries use different sign languages, so signers of different sign languages are unable to communicate with each other. Analyzing the characteristic features of gestures provides insights about a sign language, and common features across sign language gestures will help in designing a sign language recognition system. Such a system would help reduce the communication gap between sign language users and spoken language users.


2017 ◽  
Vol 7 (1.1) ◽  
pp. 539
Author(s):  
P Praveen Kumar ◽  
P V.G.D. Prasad Reddy ◽  
P Srinivasa Rao

Machine translation of sign language is a complex and challenging problem in computer vision research. In this work, we propose to handle issues such as hand tracking, feature representation, and classification for efficient interpretation of sign language from isolated sign videos. Hand tracking is performed sequentially, one hand after the other, with the effects of head movement nullified using a serial particle filter. The estimated hand positions in the video sequence are used to extract the hand regions and build a feature covariance matrix, a compact representation of the hand features representing a sign. The adaptability of the feature covariance matrix is explored by relating it to new signs without creating a new feature matrix for each individual sign. The extracted features are then fed to a neural network classifier trained with the error backpropagation algorithm. Multiple experiments were conducted on 181 sign classes with 50 sentence formations performed by 5 different signers. Experimental results show that the proposed sequential hand tracking stays close to the ground truth, and the proposed covariance features achieve a classification accuracy of 89.34% with the neural network classifier.
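The covariance-descriptor idea behind the feature matrix can be sketched in a few lines. The per-pixel feature set below (coordinates, intensity, and first derivatives) is an assumption for illustration; the paper's exact features may differ, but the principle is the same: however large the hand patch, it is summarised by one small symmetric matrix.

```python
import numpy as np

def covariance_feature(patch):
    """Compact covariance descriptor of a hand patch (illustrative sketch).

    For each pixel we stack a small feature vector (x, y, intensity,
    horizontal and vertical derivatives); the 5 x 5 covariance of these
    vectors summarises the patch independently of its size.
    """
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)  # derivatives along rows, then columns
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    features = np.stack([xs, ys, patch, gx, gy], axis=-1).reshape(-1, 5)
    return np.cov(features, rowvar=False)  # 5 x 5 symmetric matrix
```

Because the descriptor has a fixed size, patches from different frames and signers can be compared or fed to a classifier without resampling.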


2017 ◽  
Vol 26 (3) ◽  
pp. 471-481 ◽  
Author(s):  
Ananya Choudhury ◽  
Anjan Kumar Talukdar ◽  
Manas Kamal Bhuyan ◽  
Kandarpa Kumar Sarma

Automatic sign language recognition (SLR) is a current area of research, as it is meant to serve as a substitute for sign language interpreters. In this paper, we present the design of a continuous SLR system that can extract the meaningful signs and consequently recognize them. Here, we use the height of the hand trajectory as a salient feature for separating the meaningful signs from the movement epenthesis patterns. Further, we incorporate a unique set of spatial and temporal features for efficient recognition of the signs encapsulated within the continuous sequence. The implementation of an efficient hand segmentation and hand tracking technique makes our system robust to complex backgrounds as well as backgrounds with multiple signers. Experiments have established that our proposed system can identify signs from a continuous sign stream with a 92.8% spotting rate.
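The height-based sign spotting step can be caricatured with a simple thresholding sketch. The fixed threshold and the per-frame height signal are assumptions for illustration; the paper's actual separation criterion may be more elaborate:

```python
def spot_sign_segments(heights, threshold):
    """Split a continuous trajectory into candidate sign segments (sketch).

    heights   -- hand-trajectory height per frame (e.g. the y-coordinate
                 of the tracked hand centroid)
    threshold -- frames at or above this height are treated as part of a
                 meaningful sign; the rest as movement epenthesis
    Returns a list of (start, end) frame-index pairs, end exclusive.
    """
    segments, start = [], None
    for i, h in enumerate(heights):
        if h >= threshold and start is None:
            start = i                     # a sign segment begins
        elif h < threshold and start is not None:
            segments.append((start, i))   # the segment ends
            start = None
    if start is not None:                 # trajectory ended mid-sign
        segments.append((start, len(heights)))
    return segments
```

Each recovered segment would then be passed to the spatial/temporal feature extraction and recognition stages described above.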

