Recognition of American Sign Language Gestures in a Virtual Reality Using Leap Motion

2019
Vol 9 (3)
pp. 445
Author(s):  
Aurelijus Vaitkevičius ◽  
Mantas Taroza ◽  
Tomas Blažauskas ◽  
Robertas Damaševičius ◽  
Rytis Maskeliūnas ◽  
...  

We perform gesture recognition in a Virtual Reality (VR) environment using data produced by the Leap Motion device. Leap Motion generates a virtual three-dimensional (3D) hand model by recognizing and tracking the user's hands. From this model, the Leap Motion application programming interface (API) provides hand and finger locations in 3D space. We present a system that is capable of learning gestures by using the data from the Leap Motion device and the Hidden Markov classification (HMC) algorithm. When recognizing gestures of the American Sign Language (ASL), the achieved gesture recognition accuracy (mean ± SD) is 86.1 ± 8.2% and the gesture typing speed is 3.09 ± 0.53 words per minute (WPM).
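
The abstract names the Hidden Markov classification (HMC) algorithm but not its configuration. Below is a minimal sketch of the standard approach, assuming one Gaussian HMM per gesture trained on per-frame Leap Motion hand features; the hmmlearn library, the state count, the covariance type, and the feature layout are all assumptions, not the authors' code:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=5):
    """Train one HMM per gesture class.

    sequences_by_gesture: dict mapping a gesture label to a list of
    (n_frames, n_features) arrays of per-frame hand features
    (e.g., flattened 3D fingertip coordinates from the Leap Motion API).
    """
    models = {}
    for label, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)               # all frames stacked vertically
        lengths = [len(s) for s in seqs]  # per-sequence frame counts
        m = GaussianHMM(n_components=n_states,   # hypothetical state count
                        covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    # Score the observation sequence under every gesture model and
    # return the label with the highest log-likelihood.
    return max(models, key=lambda label: models[label].score(seq))
```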


Author(s):  
Muhammad Saad Amin ◽  
Muhammad Talha Amin ◽  
Muhammad Yasir Latif ◽  
Ali Asghar Jathol ◽  
Nisar Ahmed ◽  
...  


2011
Vol 14 (1)
pp. 179-199
Author(s):  
Rosalee Wolfe ◽  
Peter Cook ◽  
John C. McDonald ◽  
Jerry Schnepp

Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical but also well-tolerated by members of the Deaf community. Unfortunately, animation poses several challenges stemming from the necessity of grappling with massive amounts of data. However, the linguistics of ASL may aid in surmounting this challenge by providing structure and rules for organizing animation data. An exploration of the linguistic and extralinguistic behavior of the brows from an animator's viewpoint yields a new approach for synthesizing nonmanuals: rather than conventionally animating the anatomy, it animates the effects of interacting levels of linguistic function. Results of formal testing with Deaf users indicate that this is a promising approach.



Author(s):  
Dhanashree Shyam Bendarkar ◽  
Pratiksha Appasaheb Somase ◽  
Preety Kalyansingh Rebari ◽  
Renuka Ramkrishna Paturkar ◽  
Arjumand Masood Khan

Individuals with hearing impairment use sign language to exchange their thoughts, generally communicating among themselves through hand movements. These hand movements, however, limit communication with people who cannot understand them, so a mechanism is needed to translate between the two groups. Interaction would be easier if an infrastructure existed that could convert signs directly into text and voice messages. Many such sign language recognition frameworks have been developed recently, but most handle either static gesture recognition or dynamic gesture recognition alone. Since sentences are generated using combinations of static and dynamic gestures, such automated frameworks would serve hearing-impaired individuals better if they could detect both static and dynamic motions together. We have proposed a design and architecture for American Sign Language (ASL) recognition with convolutional neural networks (CNNs). This paper utilizes a pretrained VGG-16 architecture for static gesture recognition; for dynamic gesture recognition, spatiotemporal features are learned with a complex deep learning architecture comprising a bidirectional convolutional Long Short-Term Memory network (ConvLSTM) and a 3D convolutional neural network (3DCNN), which is responsible for extracting 2D spatiotemporal features. A sketch of both branches follows.
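
A minimal sketch of the two branches the abstract describes, written in Keras: a frozen pretrained VGG-16 backbone for static gestures, and a bidirectional ConvLSTM feeding a 3D convolution for dynamic gestures. The head sizes, frame count, and input resolutions are assumptions; the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def static_model(n_classes):
    # Transfer learning: frozen VGG-16 backbone plus a small
    # classification head (hypothetical head sizes).
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def dynamic_model(n_classes, frames=16, h=64, w=64):
    # Bidirectional ConvLSTM over the frame sequence, followed by a
    # 3D convolution, mirroring the ConvLSTM + 3DCNN combination
    # named in the abstract (filter counts are assumptions).
    return models.Sequential([
        layers.Input(shape=(frames, h, w, 3)),
        layers.Bidirectional(layers.ConvLSTM2D(
            32, kernel_size=3, padding="same", return_sequences=True)),
        layers.Conv3D(64, kernel_size=3, padding="same", activation="relu"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
```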



Author(s):  
Sarvesh Joglekar ◽  
Hrishikesh Sawant ◽  
Aayush Jain ◽  
Priya Dhadda ◽  
Pankaj Sonawane


Sensors
2018
Vol 18 (10)
pp. 3554
Author(s):  
Teak-Wei Chong ◽  
Boon-Giin Lee

Sign language is intentionally designed to allow deaf and dumb communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common among society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods for incomplete sign language recognition, whereas this study aimed for full American Sign Language (ASL) recognition, which consists of 26 letters and 10 digits. Most of the ASL letters are static (no movement), but certain ASL letters are dynamic (they require certain movements). Thus, this study also aimed to extract features from finger and hand motions to differentiate between the static and dynamic gestures. The experimental results revealed that the sign language recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. Meanwhile, the recognition rates for a combination of 26 letters and 10 digits are slightly lower, approximately 72.79% for the SVM and 88.79% for the DNN. As a result, the sign language recognition system has great potential for reducing the gap between deaf and dumb communities and others. The proposed prototype could also serve as an interpreter for the deaf and dumb in everyday life in service sectors, such as at the bank or post office.
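
As an illustration of the SVM baseline the abstract reports, the sketch below trains an RBF-kernel SVM on pre-extracted Leap Motion features with scikit-learn. The feature files, feature set, and hyperparameters are hypothetical; the paper's exact pipeline is not described in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one row per sample of hand-crafted LMC features
# (e.g., fingertip positions, inter-finger distances, palm normal),
# with letter/digit labels.
X = np.load("lmc_features.npy")
y = np.load("lmc_labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# RBF-kernel SVM with feature standardization, a common baseline
# for static-pose classification (C and gamma are assumptions).
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2%}")
```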




