Gesture recognition and information recommendation based on machine learning and virtual reality in distance education

2020 ◽  
pp. 1-11
Author(s):  
Wan Juan

Dynamic and static gesture recognition for distance-education scenarios is still theoretically immature, leaves considerable room for development, and has seen relatively little application in education. The purpose of this article is to combine gesture recognition with classroom teaching and to introduce a dynamic gesture recognition method. The study describes data collection and preprocessing in detail, converts the gesture action area into grayscale images, and then applies an improved algorithm for classification. In addition, a control experiment is designed to analyze the algorithm's performance, comparing recognition accuracy under simple and complex backgrounds. The results show that recognizing teaching gestures in distance education can effectively improve teaching efficiency, achieves high accuracy, and can be applied directly in the system.
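A minimal sketch of the preprocessing step described in the abstract (cropping the gesture action area and converting it to a grayscale image). The paper's "improved algorithm" is not specified, so a generic SVM stands in for the classifier; the bounding box, image size, and variable names are assumptions.

```python
# Illustrative sketch only: the classifier and region detector are assumptions,
# not the paper's method.
import cv2
import numpy as np
from sklearn.svm import SVC

def preprocess_gesture_region(frame_bgr, bbox, size=(64, 64)):
    """Crop the gesture action area, convert it to grayscale,
    and flatten it into a normalized feature vector."""
    x, y, w, h = bbox                          # hypothetical hand bounding box
    region = frame_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    return gray.astype(np.float32).ravel() / 255.0

# Training (X: preprocessed gesture regions, y: gesture labels):
# clf = SVC(kernel="rbf").fit(X, y)
# Prediction on a new frame:
# label = clf.predict([preprocess_gesture_region(frame, detected_bbox)])
```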

Author(s):  
Dhanashree Shyam Bendarkar ◽  
Pratiksha Appasaheb Somase ◽  
Preety Kalyansingh Rebari ◽  
Renuka Ramkrishna Paturkar ◽  
Arjumand Masood Khan

Individuals with hearing impairment use sign language to exchange their thoughts, generally communicating among themselves through hand movements. However, these hand movements impose limitations when communicating with people who cannot understand them, so a mechanism is needed to translate between the two groups. Interaction would be easier if there were infrastructure that could convert signs directly into text and voice messages. Many such sign language recognition systems have been developed recently, but most handle either static or dynamic gesture recognition alone. Since sentences are formed from combinations of static and dynamic gestures, it would be simpler for hearing-impaired individuals if an automated system could detect both together. We propose a design and architecture for American Sign Language (ASL) recognition with convolutional neural networks (CNNs). A pretrained VGG-16 architecture is used for static gesture recognition; for dynamic gesture recognition, spatiotemporal features are learned by a deeper architecture combining a bidirectional convolutional Long Short-Term Memory network (ConvLSTM) and a 3D convolutional neural network (3DCNN), which is responsible for extracting the spatiotemporal features.
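A minimal Keras sketch of the two branches described above, assuming the layer sizes, clip length, and number of classes, since the authors' exact configuration is not given in the abstract.

```python
# Hedged sketch: NUM_CLASSES, SEQ_LEN, filter counts and input sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 26        # assumed: one class per ASL letter
SEQ_LEN = 16            # assumed frames per dynamic gesture clip

# Static branch: pretrained VGG-16 used as a fixed feature extractor.
vgg = VGG16(weights="imagenet", include_top=False, input_shape=(64, 64, 3))
vgg.trainable = False
static_model = models.Sequential([
    vgg,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Dynamic branch: 3D convolutions followed by a bidirectional ConvLSTM
# to learn spatiotemporal features from frame sequences.
dynamic_model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 64, 64, 3)),
    layers.Conv3D(32, kernel_size=(3, 3, 3), padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    layers.Bidirectional(layers.ConvLSTM2D(32, kernel_size=(3, 3), padding="same")),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
```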


2013 ◽  
Vol 380-384 ◽  
pp. 3874-3877 ◽  
Author(s):  
Duan Hong ◽  
Yang Luo

This paper puts forward a gesture trajectory recognition method based on dynamic time warping (DTW). Trajectory features are computed from the direction of motion, the direction is quantized by coding, and, taking the cyclic nature of direction into account, a new distance equation is proposed for the distance calculation. The experimental results show that the method achieves dynamic gesture recognition against a complex static background.
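A hedged sketch of the idea: direction-coded trajectories compared with DTW, where the local cost respects the cyclic nature of direction codes. The 8-direction coding and the exact form of the distance are assumptions; the paper's equation is not reproduced in the abstract.

```python
# Sketch under assumed 8-direction chain coding; not the authors' exact equation.
import numpy as np

N_DIRS = 8  # assumed number of direction codes

def direction_codes(points):
    """Quantize a 2D trajectory into direction codes in [0, N_DIRS)."""
    pts = np.asarray(points, dtype=float)
    deltas = np.diff(pts, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])          # range [-pi, pi]
    return np.round(angles / (2 * np.pi / N_DIRS)).astype(int) % N_DIRS

def cyclic_distance(a, b):
    """Distance between two direction codes on a circle of N_DIRS codes."""
    d = abs(int(a) - int(b))
    return min(d, N_DIRS - d)

def dtw_distance(codes_a, codes_b):
    """Classic DTW over direction codes with the cyclic local cost."""
    n, m = len(codes_a), len(codes_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = cyclic_distance(codes_a[i - 1], codes_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Usage: compare an observed trajectory against stored templates and pick
# the template with the smallest DTW distance.
```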


Author(s):  
Monika K J

Deaf and hard-of-hearing people use sign language to exchange information within their own community and with others. Computer recognition of sign language involves sign gesture acquisition and text/speech generation. Sign gestures are classified as static or dynamic; both recognition tasks matter to the community, but static gesture recognition is less complicated than dynamic gesture recognition. Inability to speak is considered a disability. People with this disability use different modes to communicate with others; of the methods available, one common method of communication is sign language. Developing a sign language recognition application for deaf people is important, as it allows them to communicate easily even with people who do not understand sign language. Our project aims to take the fundamental step toward removing the communication gap between hearing, deaf, and mute people using sign language.


2013 ◽  
Vol 380-384 ◽  
pp. 3738-3741
Author(s):  
Hong Duan ◽  
Yang Luo

This paper proposes a static gesture recognition method that identifies objects through a combination of characteristics. A feature vector is built from five features: the number of fingers, the convexity defects of the gesture outline, the contour length, the contour area, and the Hu moments. Template matching is then applied to these feature parameters. Experiments show that the method successfully recognizes static gestures against complex backgrounds while reducing the impact of environmental change.
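A rough illustration of such a feature vector using OpenCV. The thresholds, the finger-count heuristic, and the nearest-template matching rule are assumptions; the authors' template-matching details are not given in the abstract.

```python
# Hedged sketch: feature normalization and matching rule are assumptions.
import cv2
import numpy as np

def gesture_features(binary_mask):
    """Build a feature vector from the largest hand contour: finger count
    (via convexity defects), defect count, contour length, contour area,
    and Hu moments."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    n_defects = 0 if defects is None else len(defects)
    fingers = n_defects + 1                      # crude finger estimate
    length = cv2.arcLength(cnt, True)
    area = cv2.contourArea(cnt)
    hu = cv2.HuMoments(cv2.moments(cnt)).ravel()
    return np.concatenate([[fingers, n_defects, length, area], hu])

def match_template(features, templates):
    """Return the label of the template with the smallest Euclidean distance.
    `templates` maps gesture labels to stored feature vectors."""
    return min(templates, key=lambda lbl: np.linalg.norm(features - templates[lbl]))
```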

