Dynamic Gesture Recognition
Recently Published Documents

TOTAL DOCUMENTS: 140 (FIVE YEARS: 60)
H-INDEX: 11 (FIVE YEARS: 3)

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Po Zhang ◽  
Junqiang Lin ◽  
Jianhua He ◽  
Xiuchan Rong ◽  
Chengen Li ◽  
...  

Agricultural machinery experiments are restricted by the crop production season; missing a crop growth cycle extends the machine development period. Using virtual reality technology to complete preassembly and preliminary experiments can reduce the losses caused by this problem. To improve the intelligence and stability of virtual assembly, this paper proposed a more stable dynamic gesture recognition framework: the TCP/IP protocol constituted the network communication terminal, a Leap Motion-based vision system constituted the gesture data collection terminal, and a CNN-LSTM network constituted the dynamic gesture recognition and classification terminal. The dynamic gesture recognition framework and the harvester virtual assembly platform together formed a virtual assembly system supporting gesture interaction. Experimental analysis showed that the improved CNN-LSTM network was compact and could quickly establish a stable and accurate gesture recognition model, with an average accuracy of 98.0% (±0.894). The assembly efficiency of the virtual assembly system with this framework improved by approximately 15%. The results showed that the accuracy and stability of the model met the requirements, the corresponding assembly parts were robust in the virtual simulation environment of the whole machine, and the harvesting behaviour in the virtual reality scene was close to the real scene. The virtual assembly system under this framework provides technical support for unmanned farms and virtual experiments on agricultural machinery.
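A minimal sketch of the kind of CNN-LSTM classifier described above, assuming hypothetical input shapes (64x64 single-channel hand frames), layer sizes, and gesture-class count; none of these values are taken from the paper, and the Leap Motion / TCP/IP data pipeline is omitted.

```python
import torch
import torch.nn as nn

class CNNLSTMGestureClassifier(nn.Module):
    """Illustrative CNN-LSTM for dynamic gesture sequences (shapes are assumptions)."""
    def __init__(self, num_classes=10, hidden_size=128):
        super().__init__()
        # Per-frame feature extractor; input assumed to be 1 x 64 x 64 hand images.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32 * 16 * 16, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time, 1, 64, 64)
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, *x.shape[2:])).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)   # classify from the final hidden state
        return self.fc(h_n[-1])          # (batch, num_classes)

# Example: a batch of 4 gesture clips, 20 frames each.
logits = CNNLSTMGestureClassifier()(torch.randn(4, 20, 1, 64, 64))
```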


Author(s):  
Zhiwen Yang ◽  
Du Jiang ◽  
Ying Sun ◽  
Bo Tao ◽  
Xiliang Tong ◽  
...  

Gesture recognition technology is widely used for flexible and precise control of manipulators in assisted medical applications. Our MResLSTM algorithm can effectively perform dynamic gesture recognition, and the decoded surface EMG (sEMG) signals are applied to the controller to improve the fluency of artificial hand control. Much current gesture recognition research using sEMG has focused on static gestures, and recognition accuracy depends on the extraction and selection of features. However, static gesture research cannot meet the requirements of natural human-computer interaction and dexterous manipulator control. Therefore, a multi-stream residual network (MResLSTM) is proposed for dynamic hand movement recognition. This study aims to improve the accuracy and stability of dynamic gesture recognition and, at the same time, to advance research on smooth manipulator control. We combine the residual model and the convolutional long short-term memory model into a unified framework. The architecture extracts spatiotemporal features from two aspects, global and deep, and uses feature fusion to retain essential information. Pointwise group convolution and channel shuffle are used to reduce the amount of network computation. A dataset containing six dynamic gestures was constructed for model training. The experimental results show that, on the same recognition model, fusing the sEMG signal with the acceleration signal yields better gesture recognition than using the sEMG signal alone. The proposed approach obtains competitive performance on our dataset, with a recognition accuracy of 93.52%, and achieves state-of-the-art performance with 89.65% precision on the Ninapro DB1 dataset. Our bionic calculation method is applied to the controller, enabling continuous human-computer interaction and flexible manipulator control.
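The pointwise-group-convolution-plus-channel-shuffle strategy mentioned above is the ShuffleNet-style trick for cutting computation; a short sketch of the shuffle operation follows. Shapes and group counts are generic illustrations, not the paper's exact configuration.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Channel shuffle as used alongside pointwise group convolution.

    Reorders channels so information mixes across groups while the cheap
    grouped 1x1 convolutions remain usable in the next layer.
    """
    b, c, *spatial = x.shape
    assert c % groups == 0, "channel count must be divisible by the group count"
    x = x.reshape(b, groups, c // groups, *spatial)  # split channels into groups
    x = x.transpose(1, 2).contiguous()               # interleave the groups
    return x.reshape(b, c, *spatial)

# Example: shuffle a batch of 1D sEMG feature maps with 8 channels in 2 groups.
shuffled = channel_shuffle(torch.randn(4, 8, 50), groups=2)
```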


2021 ◽  
Vol 11 (21) ◽  
pp. 9789
Author(s):  
Jiaqi Dong ◽  
Zeyang Xia ◽  
Qunfei Zhao

Augmented reality assisted assembly training (ARAAT) is an effective and affordable technique for labor training in the automobile and electronics industries. In general, most ARAAT tasks are conducted through real-time hand operations. In this paper, we propose a dynamic gesture recognition and prediction algorithm that evaluates whether the hand operations for a given ARAAT task conform to the standard and are achieved. We consider that the given task can be decomposed into a series of hand operations, and each hand operation into several continuous actions. Each action is then associated with a standard gesture based on the practical assembly task, so that the conformance and achievement of the actions within the operations can be identified and predicted from the sequences of gestures rather than from the performance over the whole task. Based on practical industrial assembly, we specified five typical tasks, three typical operations, and six standard actions. We used Zernike moments combined with histograms of oriented gradients, and linearly interpolated motion trajectories, to represent the 2D static and 3D dynamic features of the standard gestures, respectively, and chose a directional pulse-coupled neural network as the classifier to recognize the gestures. In addition, we defined an action unit to reduce feature dimensionality and computational cost. During gesture recognition, we iteratively optimized the gesture boundaries by calculating the score probability density distribution, reducing interference from invalid gestures and improving precision. The proposed algorithm was evaluated on four datasets, and the experimental results show that it increases recognition accuracy and reduces computational cost.
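A brief sketch of the feature side of this pipeline, assuming hypothetical frame and trajectory sizes: a HOG descriptor stands in for the 2D static features (the Zernike-moment component is omitted), and linear interpolation resamples a 3D hand trajectory as the dynamic feature. The directional pulse-coupled neural network classifier is not reproduced here.

```python
import numpy as np
from skimage.feature import hog

def static_gesture_features(gray_frame: np.ndarray) -> np.ndarray:
    """HOG descriptor of a single hand frame (one of the 2D static features;
    parameters are illustrative, not the paper's)."""
    return hog(gray_frame, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def resample_trajectory(points: np.ndarray, n: int = 32) -> np.ndarray:
    """Linearly interpolate a 3D hand trajectory to a fixed length,
    analogous to the linear-interpolation motion trajectories above."""
    t_old = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, points[:, d]) for d in range(3)], axis=1)

# Example with random data standing in for a 64x64 frame and a 50-point trajectory.
feat_2d = static_gesture_features(np.random.rand(64, 64))
feat_3d = resample_trajectory(np.random.rand(50, 3)).ravel()
```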


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6724
Author(s):  
Tsung-Han Tsai ◽  
Yih-Ru Tsai

With advancements in technology, more and more research has focused on enhancing the quality and convenience of daily life. Along with the development of gesture control systems, many controllers, such as the keyboard, mouse, and other devices, have been replaced with remote control products that are gradually becoming more intuitive for users. However, vision-based hand gesture recognition systems still have many problems to overcome. Most hand detection methods adopt a skin filter or motion filter for pre-processing; in a noisy environment, however, it is not easy to correctly extract the objects of interest. In this paper, a dual-camera VLSI design is proposed to construct a depth map with a stereo matching algorithm and recognize hand gestures. The proposed system adopts an adaptive depth filter to separate foreground objects of interest from the background. We also propose dynamic gesture recognition using depth and coordinate information. The system can perform both static and dynamic gesture recognition. The ASIC design is implemented in a TSMC 90 nm process with about 47.3 K gates and 27.8 mW of power consumption. The average gesture recognition accuracy is 83.98%.
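To illustrate the adaptive depth-filter idea in software terms only (the paper's implementation is a hardware VLSI design), the following hedged sketch derives a foreground threshold from the depth map itself; the percentile and margin values are assumptions.

```python
import numpy as np

def depth_foreground_mask(depth_map: np.ndarray, margin: float = 0.15) -> np.ndarray:
    """Separate the nearest object (assumed to be the hand) from the background
    using a threshold adapted to the current depth map. A software illustration
    of the adaptive-depth-filter idea, not the paper's ASIC design."""
    valid = depth_map[depth_map > 0]              # ignore holes in the depth map
    nearest = np.percentile(valid, 5)             # robust estimate of the closest surface
    threshold = nearest * (1.0 + margin)          # keep everything within a margin of it
    return (depth_map > 0) & (depth_map <= threshold)

# Example: mask the near-field region of a synthetic 240x320 depth map (meters).
mask = depth_foreground_mask(np.random.uniform(0.3, 3.0, size=(240, 320)))
```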


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5900
Author(s):  
Bo Tao ◽  
Licheng Huang ◽  
Haoyi Zhao ◽  
Gongfa Li ◽  
Xiliang Tong

Similarity analysis of time-sequence images to achieve image matching is a foundation of tasks in dynamic environments, such as multi-object tracking and dynamic gesture recognition. We therefore propose a matching method for time-sequence images based on the Siamese network. Inspired by contrastive learning, two different comparison parts are designed and embedded in the network. The first part compares the input image pair to generate a correlation matrix. The second part compares this correlation matrix, the output of the first comparison part, with a template in order to calculate the similarity. An improved loss function is used to constrain the image matching and similarity calculation. Experimental verification shows that the method not only performs better but also has some ability to estimate the camera pose.
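A schematic sketch of the first comparison stage: a shared-weight encoder and a correlation matrix between two frames. The encoder layers are invented for illustration, and the paper's template comparison and improved loss are not reproduced.

```python
import torch
import torch.nn as nn

class SiameseCorrelation(nn.Module):
    """Shared-weight encoder plus a correlation matrix between two images.
    Architecture details are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, img_a, img_b):
        fa = self.encoder(img_a).flatten(2)            # (B, C, H*W)
        fb = self.encoder(img_b).flatten(2)
        fa = nn.functional.normalize(fa, dim=1)        # cosine-style correlation
        fb = nn.functional.normalize(fb, dim=1)
        return torch.einsum('bci,bcj->bij', fa, fb)    # correlation matrix (B, H*W, H*W)

# Example: correlation between two 64x64 frames from a time sequence.
corr = SiameseCorrelation()(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```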


2021 ◽  
Vol 2021 ◽  
pp. 1-5
Author(s):  
Zhao Feng ◽  
Jinlong Wu ◽  
Taile Ni

Objective. To explore the research and application of multifeature gesture recognition in virtual reality human-computer interaction, and to identify a gesture recognition scheme that achieves a better human-computer interaction experience. Methods. After studying the technical difficulties of gesture recognition, static gesture feature recognition and feature fusion algorithms were compared and applied. In the research on gesture partitioning, the characteristic parameters were adjusted for comparison, and spatiotemporal dynamic gesture tracking trajectories were combined with dynamic gesture recognition to compare the recognition effect under different schemes. Results. The central area was designated region 0, and the surrounding area was divided into regions 1-4 in the counterclockwise direction. Compared with traditional gesture handling, the overlapping problem was reduced under the four-partition mode, gestures were displayed more clearly, and gesture processing operations were carried out more efficiently. Conclusion. Gesture recognition requires the combination of static gesture feature recognition, gesture feature fusion, spatiotemporal trajectory features, and dynamic gesture trajectory features to achieve a better human-computer interaction experience.
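One way to read the partition described in the Results is a central region 0 surrounded by four regions numbered counterclockwise; the following small sketch maps a normalized hand position to such a region. The center radius, quadrant boundaries, and coordinate convention are all assumptions for illustration.

```python
import math

def gesture_region(x: float, y: float, cx: float = 0.5, cy: float = 0.5,
                   center_radius: float = 0.15) -> int:
    """Map a normalized hand position to a partition with region 0 at the
    center and regions 1-4 counterclockwise around it (illustrative values)."""
    dx, dy = x - cx, y - cy
    if math.hypot(dx, dy) <= center_radius:
        return 0
    angle = math.atan2(dy, dx) % (2 * math.pi)    # counterclockwise from the +x axis
    return int(angle // (math.pi / 2)) + 1        # quadrants 1..4

# Example: a point up and to the right of center falls in region 1.
print(gesture_region(0.8, 0.8))
```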


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yuting Liu ◽  
Du Jiang ◽  
Haojie Duan ◽  
Ying Sun ◽  
Gongfa Li ◽  
...  

Gesture recognition is one of the important modes of human-computer interaction and is mainly detected with visual technology. Temporal and spatial features are extracted by convolving the video containing the gesture. However, compared with the convolution of a single image, the multiframe images of a dynamic gesture require more computation, more complex feature extraction, and more network parameters, which affects the recognition efficiency and real-time performance of the model. To solve these problems, a dynamic gesture recognition model based on CBAM-C3D is proposed. Key-frame extraction, multimodal joint training, and network optimization with batch normalization (BN) layers are used to improve network performance. The experiments show that the recognition accuracy of the proposed 3D convolutional neural network combined with an attention mechanism reaches 72.4% on the EgoGesture dataset, a considerable improvement over current mainstream dynamic gesture recognition methods, which verifies the effectiveness of the proposed algorithm.
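A compact sketch of a CBAM-style channel-attention block applied to 3D (video) convolutional features, of the kind an attention-augmented C3D backbone would use. The channel count, reduction ratio, and placement in the network are illustrative assumptions; the full CBAM-C3D model (including spatial attention) is not reproduced.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """CBAM-style channel attention for 3D video feature maps (illustrative)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # x: (batch, channels, time, height, width)
        avg = self.mlp(x.mean(dim=(2, 3, 4)))          # average-pooled channel descriptor
        mx = self.mlp(x.amax(dim=(2, 3, 4)))           # max-pooled channel descriptor
        weights = torch.sigmoid(avg + mx).reshape(*x.shape[:2], 1, 1, 1)
        return x * weights                             # reweight channels

# Example: apply attention to features of a 16-frame clip.
out = ChannelAttention3D(32)(torch.randn(2, 32, 16, 28, 28))
```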


2021 ◽  
pp. 105219
Author(s):  
Yong-Liang Zhang ◽  
Qiang Li ◽  
Hui Zhang ◽  
Wei-Zhen Wang ◽  
Jun Han ◽  
...  

2021 ◽  
Vol 48 (7) ◽  
pp. 781-789
Author(s):  
Jaeyeong Ryu ◽  
Adithya B ◽  
Ashok Kumar Patil ◽  
Youngho Chai
