Novel Radar-based Gesture Recognition System using Optimized CNN-LSTM Deep Neural Network for Low-power Microcomputer Platform

Author(s): Mateusz Chmurski, Mariusz Zubert

2021, Vol 102, pp. 04009
Author(s): Naoto Ageishi, Fukuchi Tomohide, Abderazek Ben Abdallah

Hand gestures are a kind of nonverbal communication in which visible bodily actions are used to convey important messages. Recently, hand gesture recognition has received significant attention from the research community for various applications, including advanced driver assistance systems, prosthetics, and robotic control. Accurate and fast classification of hand gestures is therefore required. In this research, we created a deep neural network as the first step toward developing a real-time, camera-only hand gesture recognition system that does not rely on electroencephalogram (EEG) signals. We present the system software architecture in a fair amount of detail. The proposed system was able to recognize hand signs with an accuracy of 97.31%.
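As a rough illustration of the kind of camera-only classifier this abstract describes, the sketch below defines a small image-classification CNN in PyTorch. The input resolution (64x64 RGB frames), the number of gesture classes (10), and the layer sizes are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch only: a small image-classification CNN of the kind the
# abstract describes. Input resolution (64x64 RGB) and the number of gesture
# classes (10) are assumptions, not values from the paper.
import torch
import torch.nn as nn

class HandGestureCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of camera frames (batch, channels, height, width).
frames = torch.randn(4, 3, 64, 64)
logits = HandGestureCNN()(frames)
print(logits.shape)  # torch.Size([4, 10])
```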


2020, Vol 11 (1), pp. 10
Author(s): Muchun Su, Diana Wahyu Hayati, Shaowu Tseng, Jiehhaur Chen, Hsihsien Wei

Health care for independently living elders is more important than ever. Automatic recognition of their Activities of Daily Living (ADL) is the first step toward addressing the health care issues seniors face in an efficient way. This paper describes a Deep Neural Network (DNN)-based recognition system aimed at facilitating smart care, which combines ADL recognition, image/video processing, movement calculation, and a DNN. An algorithm is developed for processing skeletal data, filtering noise, and recognizing patterns to identify the 10 most common ADL, including standing, bending, squatting, sitting, eating, hand holding, hand raising, sitting plus drinking, standing plus drinking, and falling. The evaluation results show that this DNN-based system is a suitable method for ADL recognition, with an accuracy rate of over 95%. The findings support the feasibility of the system, which is efficient enough for both practical and academic applications.
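The pipeline outlined above (skeletal data processing, noise filtering, then pattern recognition) can be sketched roughly as follows. The moving-average smoothing, the joint count (25), and the small fully connected classifier are illustrative assumptions; the paper's actual filtering and network design may differ.

```python
# Illustrative sketch only: moving-average smoothing of skeletal joint
# trajectories followed by a small fully connected classifier, in the spirit
# of the pipeline the abstract outlines. Joint count (25), window size, and
# the 10 ADL classes are assumptions for the example.
import numpy as np
import torch
import torch.nn as nn

def smooth_joints(skeleton_seq: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a (frames, joints, 3) sequence with a per-coordinate moving average."""
    kernel = np.ones(window) / window
    smoothed = np.empty_like(skeleton_seq)
    for j in range(skeleton_seq.shape[1]):
        for c in range(3):
            smoothed[:, j, c] = np.convolve(skeleton_seq[:, j, c], kernel, mode="same")
    return smoothed

class ADLClassifier(nn.Module):
    def __init__(self, num_joints: int = 25, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 3, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):  # x: (batch, joints * 3) per-frame feature vector
        return self.net(x)

# Example: smooth a 120-frame sequence and classify one frame.
seq = smooth_joints(np.random.rand(120, 25, 3).astype(np.float32))
frame_features = torch.from_numpy(seq[0].reshape(1, -1))
print(ADLClassifier()(frame_features).shape)  # torch.Size([1, 10])
```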


Sensors, 2021, Vol 21 (7), pp. 2540
Author(s): Zhipeng Yu, Jianghai Zhao, Yucheng Wang, Linglong He, Shaonan Wang

In recent years, surface electromyography (sEMG)-based human–computer interaction has been developed to improve people's quality of life. Gesture recognition based on the instantaneous values of sEMG has the advantages of accurate prediction and low latency. However, the low generalization ability of such hand gesture recognition methods limits their application to new subjects and new hand gestures, and imposes a heavy training burden. For this reason, a transfer learning (TL) strategy based on a convolutional neural network is proposed for instantaneous gesture recognition to improve the generalization performance of the target network. CapgMyo and NinaPro DB1 are used to evaluate the validity of the proposed strategy. Compared with a non-transfer-learning (non-TL) strategy, the proposed strategy improves the average accuracy of new-subject and new-gesture recognition by 18.7% and 8.74%, respectively, when up to three repeated gestures are employed, and reduces the training time by a factor of three. Experiments verify the transferability of spatial features and the validity of the proposed strategy in improving the recognition accuracy for new subjects and new gestures while reducing the training burden. The proposed TL strategy provides an effective way of improving the generalization ability of gesture recognition systems.
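The freeze-and-fine-tune pattern underlying such a TL strategy can be sketched as below. The network layout, the sEMG sample treated as a single-channel image, and the class counts are assumptions for illustration; this is not the authors' exact architecture or training procedure.

```python
# Illustrative sketch only: the general freeze-and-fine-tune transfer-learning
# pattern the abstract describes, not the authors' exact network. The sEMG
# input treated as a 1-channel "image" and the class counts are assumptions.
import torch
import torch.nn as nn

class SEMGConvNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

# 1) A network pretrained on source subjects/gestures (weights assumed given).
source_net = SEMGConvNet(num_classes=8)

# 2) Transfer: reuse the spatial feature extractor, freeze it, and attach a
#    fresh head sized for the new subject's (or new gestures') classes.
target_net = SEMGConvNet(num_classes=12)
target_net.features.load_state_dict(source_net.features.state_dict())
for p in target_net.features.parameters():
    p.requires_grad = False

# 3) Fine-tune only the new head on a few repetitions of the target data.
optimizer = torch.optim.Adam(target_net.head.parameters(), lr=1e-3)
```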


2018, Vol 2018, pp. 1-10
Author(s): Saad Albawi, Oguz Bayat, Saad Al-Azawi, Osman N. Ucan

Recently, social touch gesture recognition has been considered an important topic for the touch modality, as it can lead to highly efficient and realistic human-robot interaction. In this paper, a deep convolutional neural network is selected to implement a social touch recognition system using raw input samples (sensor data) only. Touch gesture recognition is performed on a dataset previously recorded from numerous subjects performing various social gestures. This dataset, dubbed the Corpus of Social Touch, was collected with touches performed on a mannequin arm. A leave-one-subject-out cross-validation method is used to evaluate system performance. The proposed method can recognize gestures in nearly real time after acquiring a minimum number of frames (on average, between 0.2% and 4.19% of the original frame lengths), with a classification accuracy of 63.7%. The achieved classification accuracy is competitive with existing algorithms. Furthermore, the proposed system outperforms other classification algorithms in terms of classification ratio and touch recognition time on the same dataset, without requiring data preprocessing.
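Leave-one-subject-out cross-validation, as used in this evaluation, can be sketched in a few lines. The data shapes, the number of subjects, and the majority-class placeholder standing in for the paper's CNN are all made up for the example.

```python
# Illustrative sketch only: leave-one-subject-out evaluation as the abstract
# describes, with a placeholder classifier standing in for the paper's CNN.
# Subject IDs, feature dimensions, and class counts are made up.
import numpy as np

def leave_one_subject_out(X, y, subjects, train_and_eval):
    """Hold out each subject in turn; return per-subject accuracies."""
    accuracies = []
    for held_out in np.unique(subjects):
        train_mask = subjects != held_out
        test_mask = ~train_mask
        acc = train_and_eval(X[train_mask], y[train_mask], X[test_mask], y[test_mask])
        accuracies.append(acc)
    return np.array(accuracies)

# Dummy data: 300 touch samples, 64 pressure-sensor features, 10 subjects.
rng = np.random.default_rng(0)
X = rng.random((300, 64))
y = rng.integers(0, 14, size=300)        # e.g. 14 touch-gesture classes
subjects = rng.integers(0, 10, size=300)

def majority_baseline(X_tr, y_tr, X_te, y_te):
    # Placeholder for training the CNN; predicts the most frequent training class.
    pred = np.bincount(y_tr).argmax()
    return float(np.mean(y_te == pred))

print(leave_one_subject_out(X, y, subjects, majority_baseline).mean())
```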

