High Inclusiveness and Accuracy Motion Blur Real-Time Gesture Recognition Based on YOLOv4 Model Combined Attention Mechanism and DeblurGanv2

2021 ◽  
Vol 11 (21) ◽  
pp. 9982
Author(s):  
Hongchao Zhuang ◽  
Yilu Xia ◽  
Ning Wang ◽  
Lei Dong

The combination of gesture recognition and aerospace exploration robots can realize efficient, non-contact control of the robots. In the harsh aerospace environment, captured gesture images are inevitably blurred and damaged. Motion-blurred images not only lose part of the transmitted information but also degrade the subsequent training of neural networks. To improve the speed and accuracy of motion-blurred gesture recognition, the YOLOv4 (You Only Look Once, version 4) algorithm is studied from two aspects: motion-blurred image processing and model optimization. DeblurGanv2 is employed to remove the motion blur from the gestures in the YOLOv4 network's input pictures. In terms of model structure, the K-means++ algorithm is used to cluster the prior boxes to obtain more appropriate size parameters for them. The CBAM attention mechanism and an SPP (spatial pyramid pooling) structure are added to the YOLOv4 model to improve the efficiency of network learning. The dataset for network training is designed for human–computer interaction in aerospace environments. To reduce redundant features in the captured images and enhance the effect of model training, a Wiener filter and a bilateral filter are superimposed on the blurred images in the dataset to coarsely remove the motion blur, and data augmentation is performed by imitating different environments. A YOLOv4-gesture model is built that combines the K-means++ algorithm with the CBAM and SPP mechanisms, and a DeblurGanv2 model is built to process the input images for YOLOv4 target recognition; together they compose the YOLOv4-motion-blur-gesture model. The augmented and enhanced gesture dataset is used for model training. The experimental results demonstrate that the YOLOv4-motion-blur-gesture model performs comparatively well: it recognizes motion-blurred gestures in real time with high inclusiveness and accuracy, improving the network training speed by 30%, the target detection accuracy by 10%, and the mAP value by about 10%. The constructed YOLOv4-motion-blur-gesture model has stable performance. It can not only meet the requirements of real-time human–computer interaction in space under complex conditions, but can also be applied to other environments with complex backgrounds that require real-time detection.
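The anchor-clustering step can be made concrete with a short sketch. Below is a minimal, hypothetical Python/NumPy illustration (not the authors' code) of K-means++ clustering of ground-truth box sizes using the 1 - IoU distance commonly used for YOLO-style prior boxes; the box dimensions are synthetic.

```python
# Minimal sketch: K-means++ clustering of (width, height) pairs with a
# 1 - IoU distance to derive prior (anchor) box sizes. Synthetic data;
# not the authors' implementation.
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of (w, h) anchors,
    assuming all boxes share a common corner (position-free IoU)."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    return inter / (box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter)

def kmeans_pp_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # K-means++ seeding: first center uniform, later ones weighted by distance.
    anchors = boxes[rng.integers(len(boxes))][None, :]
    while len(anchors) < k:
        dist = np.array([1.0 - iou_wh(b, anchors).max() for b in boxes])
        anchors = np.vstack([anchors, boxes[rng.choice(len(boxes), p=dist / dist.sum())]])
    for _ in range(iters):
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for i in range(k):  # keep the old center if a cluster empties out
            if np.any(assign == i):
                anchors[i] = boxes[assign == i].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by box area

# Hypothetical (width, height) pairs for labeled gesture boxes, in pixels.
rng = np.random.default_rng(1)
boxes = np.abs(rng.normal(loc=[80, 120], scale=[20, 30], size=(500, 2)))
print(kmeans_pp_anchors(boxes, k=9))
```

In a real pipeline the (width, height) pairs would come from the labeled gesture dataset, and the resulting anchors would replace YOLOv4's default priors.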

2020 ◽  
Vol 10 (2) ◽  
pp. 722 ◽  
Author(s):  
Dinh-Son Tran ◽  
Ngoc-Huynh Ho ◽  
Hyung-Jeong Yang ◽  
Eu-Tteum Baek ◽  
Soo-Hyung Kim ◽  
...  

Using hand gestures is a natural method of interaction between humans and computers. We use gestures to express meaning and thoughts in our everyday conversations. Gesture-based interfaces are used in many applications across a variety of fields, such as smartphones, televisions (TVs), video gaming, and so on. With advancements in technology, hand gesture recognition is becoming an increasingly promising and attractive technique in human–computer interaction. In this paper, we propose a novel method for real-time fingertip detection and hand gesture recognition using an RGB-D camera and a 3D convolutional neural network (3DCNN). The system accurately and robustly extracts fingertip locations and recognizes gestures in real time. We demonstrate the accuracy and robustness of the interface by evaluating hand gesture recognition across a variety of gestures. In addition, we develop a tool for manipulating computer programs to show the feasibility of using hand gesture recognition. The experimental results show that our system achieves a high level of hand gesture recognition accuracy and is thus a promising approach to gesture-based interfaces for human–computer interaction.
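As a rough illustration of the recognition stage, the following PyTorch sketch shows the general shape of a 3DCNN classifier that consumes a short clip of RGB-D frames and predicts a gesture class. The layer sizes, clip length, channel count, and class count are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a 3D CNN gesture classifier over short RGB-D clips.
# All dimensions below are hypothetical choices for illustration.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, num_classes=10, in_channels=4):  # RGB-D -> 4 channels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),          # pool space first, keep time
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(),
            nn.MaxPool3d(2),                  # now pool time and space
            nn.AdaptiveAvgPool3d(1),          # -> (N, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip):                  # clip: (N, C, T, H, W)
        return self.classifier(self.features(clip).flatten(1))

# Hypothetical input: batch of 2 clips, 4-channel RGB-D, 16 frames of 64x64.
model = Gesture3DCNN(num_classes=10)
logits = model(torch.randn(2, 4, 16, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```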


2018 ◽  
Vol 15 (02) ◽  
pp. 1750022 ◽  
Author(s):  
Jing Li ◽  
Jianxin Wang ◽  
Zhaojie Ju

Gesture recognition plays an important role in human–computer interaction. However, most existing methods are complex and time-consuming, which limits the use of gesture recognition in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, hand digit gestures of 0–9 can be recognized accurately and efficiently. According to the experimental results, the proposed gesture recognition system is effective and robust: it is invariant to complex backgrounds, illumination changes, reversal, structural distortion, rotation, and so on. We have tested the system both online and offline, which showed that it satisfies real-time requirements and can therefore be applied to gesture recognition in real-world human–computer interaction systems.
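The feature-fusion idea can be sketched schematically: one feature vector is computed from the depth image and another from the hand skeleton joints, and their concatenation is fed to a standard classifier. The toy descriptors and the SVM below are illustrative assumptions, not the authors' pipeline.

```python
# Schematic sketch of depth + skeleton feature fusion for static gesture
# classification. Descriptors and classifier are hypothetical stand-ins.
import numpy as np
from sklearn.svm import SVC

def depth_features(depth, bins=16):
    """Toy descriptor: normalized histogram of hand-region depth values."""
    hist, _ = np.histogram(depth[depth > 0], bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def skeleton_features(joints):
    """Toy descriptor: joint positions relative to the palm (joint 0)."""
    return (joints - joints[0]).ravel()

def fused_vector(depth, joints):
    # Feature fusion: concatenate the two modality descriptors.
    return np.concatenate([depth_features(depth), skeleton_features(joints)])

# Hypothetical training data: 100 random samples with digit labels 0-9.
rng = np.random.default_rng(0)
X = np.stack([fused_vector(rng.random((64, 64)), rng.random((21, 3)))
              for _ in range(100)])
y = rng.integers(0, 10, size=100)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```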


Author(s):  
André Baltazar ◽  
Luís Gustavo Martins

Computer programming is not an easy task, and as with all difficult tasks, it can be seen as tedious, as impossible, or as a challenge. Learning to program with a purpose therefore enables that "challenge mindset" and encourages students to apply themselves, overcoming their weaknesses and exploring different theories and methods to achieve their goals. This chapter describes the process of programming a framework for real-time human gesture recognition. That alone is a good challenge, but the ultimate goal is to enable new ways of human-computer interaction through expressive gestures and to allow performers to control creative artistic events in real time with their gestures. The chapter starts with a review of human gesture recognition. It then presents the framework architecture, its main modules, and its algorithms. It closes with a description of two artistic applications using the ZatLab framework.

