Silhouette Pose Feature-Based Human Action Classification Using Capsule Network

2021 ◽  
Vol 14 (2) ◽  
pp. 106-124
Author(s):  
A. F. M. Saifuddin Saif ◽  
Md. Akib Shahriar Khan ◽  
Abir Mohammad Hadi ◽  
Rahul Proshad Karmoker ◽  
Joy Julian Gomes

Recent years have seen a rise in the use of various machine learning techniques in computer vision, particularly in pose feature-based human action recognition using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNN-based methods are useful in recognizing human actions for combined motions (e.g., standing up, hand shaking, walking). However, under camera motion, occlusion, and multiple people in the scene, CNNs suppress important feature information and are not efficient enough to recognize variations in human action. Moreover, RNNs with long short-term memory (LSTM) require more computational power to retain the memories needed to classify human actions. This research proposes an extended framework based on a capsule network that uses silhouette pose features to recognize human actions. The proposed framework achieved an accuracy of 95.64%, higher than previous research methodologies. Extensive experimental validation of the proposed framework reveals an efficiency that is expected to contribute significantly to action recognition research.
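The defining non-linearity of a capsule network is the "squash" function of Sabour et al., which rescales a capsule's output vector so its length lies in [0, 1) and can act as an action-presence probability while its orientation encodes pose. A minimal numpy sketch of that operation (a generic illustration, not the authors' implementation; the 8-D `pose_capsule` vector is a hypothetical toy example):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' non-linearity: shrinks vector length into [0, 1)
    while preserving the vector's direction (Sabour et al., 2017)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# A toy "pose capsule": an 8-D feature vector for one silhouette.
pose_capsule = np.array([3.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0])
v = squash(pose_capsule)
length = float(np.linalg.norm(v))  # < 1, usable as an action probability
```

Long input vectors squash to lengths near 1, short ones near 0, which is what lets a capsule's length be read as a class score.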

Drones ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 82 ◽  
Author(s):  
Asanka G. Perera ◽  
Yee Wei Law ◽  
Javaan Chahl

Aerial human action recognition is an emerging topic in drone applications. Commercial drone platforms capable of detecting basic human actions such as hand gestures have been developed. However, a limited number of aerial video datasets are available to support increased research into aerial human action analysis. Most of the datasets are confined to indoor scenes or object tracking, and many outdoor datasets do not have sufficient human body detail to apply state-of-the-art machine learning techniques. To fill this gap and enable research in wider application areas, we present an action recognition dataset recorded in an outdoor setting. A free-flying drone was used to record 13 dynamic human actions. The dataset contains 240 high-definition video clips consisting of 66,919 frames. All of the videos were recorded from low altitude and at low speed to capture maximum human pose detail at relatively high resolution. This dataset should be useful to many research areas, including action recognition, surveillance, situational awareness, and gait analysis. To establish a baseline, we evaluated the dataset with a pose-based convolutional neural network (P-CNN) and high-level pose feature (HLPF) descriptors. The overall baseline action recognition accuracy calculated using P-CNN was 75.92%.


2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Chao Tang ◽  
Huosheng Hu ◽  
Wenjian Wang ◽  
Wei Li ◽  
Hua Peng ◽  
...  

The representation and selection of action features directly affect the recognition performance of human action recognition methods. A single feature is often affected by human appearance, the environment, camera settings, and other factors. Aiming at the problem that existing multimodal feature fusion methods cannot effectively measure the contribution of different features, this paper proposes a human action recognition method based on RGB-D image features, which makes full use of the multimodal information provided by RGB-D sensors to extract effective human action features. Three kinds of human action features with different modal information are proposed: the RGB-HOG feature based on RGB image information, which has good geometric scale invariance; the D-STIP feature based on depth images, which maintains the dynamic characteristics of human motion and has local invariance; and the S-JRPF feature based on skeleton information, which describes the spatial structure of motion well. At the same time, multiple K-nearest-neighbor classifiers with good generalization ability are combined for decision-level classification. The experimental results show that the algorithm achieves strong recognition results on the public G3D and CAD60 datasets.
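The decision-level fusion idea above, one K-nearest-neighbor vote per modality followed by a majority vote across modalities, can be sketched in a few lines. This is a generic illustration under stated assumptions (the tiny "HOG" and "STIP" feature arrays are hypothetical toy data, not the paper's features):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[nearest].tolist()).most_common(1)[0][0]

def fused_predict(modalities, x_per_modality, k=3):
    """Decision-level fusion: one KNN vote per modality, then a majority
    vote over the per-modality decisions."""
    votes = [knn_predict(X, y, x, k) for (X, y), x in zip(modalities, x_per_modality)]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical toy 2-D descriptors for two modalities, two classes (0 and 1).
hog_X  = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
stip_X = np.array([[1.0, 1.0], [1.1, 1.0], [9.0, 9.0], [9.1, 9.0]])
y = np.array([0, 0, 1, 1])

label = fused_predict([(hog_X, y), (stip_X, y)],
                      [np.array([5.0, 5.05]), np.array([9.0, 9.0])], k=3)
```

A weighted vote (one weight per modality, learned on a validation set) is the natural next step when the modalities contribute unequally.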


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Qiulin Wang ◽  
Baole Tao ◽  
Fulei Han ◽  
Wenting Wei

The extraction and recognition of human actions has always been a research hotspot in the field of state recognition, with wide application prospects in many fields. In sports, it can reduce the occurrence of accidental injuries and improve the training level of basketball players, so extracting effective features from the dynamic body movements of basketball players is of great significance. In order to improve the fairness of the basketball game, realize accurate recognition of the athletes' movements, and simultaneously improve the level of the athletes and regulate their movements during training, this article uses deep learning to extract and recognize the movements of basketball players. This paper implements a human action recognition algorithm based on deep learning, which automatically extracts image features through convolution kernels and thus greatly improves efficiency compared with traditional manual feature extraction. The method uses the deep convolutional neural network VGG model on the TensorFlow platform to extract and recognize human actions. On the Matlab platform, the KTH and Weizmann datasets are preprocessed to obtain the input image sets. The preprocessed datasets are then used to train the model, and the optimal network model and corresponding results are obtained by testing on the two datasets. Finally, the two datasets are analyzed in detail, the specific cause of each action confusion is given, and the recognition accuracy and average recognition accuracy of each action category are calculated. The experimental results show that the human action recognition algorithm based on deep learning achieves a high recognition accuracy.
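The automatic feature extraction that distinguishes a VGG-style network from hand-crafted pipelines boils down to stacking the 2-D convolution operation. A minimal numpy sketch of one such convolution (a fixed edge kernel stands in for a learned one; the toy 5x5 "frame" is hypothetical):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation), the core operation a
    VGG-style network stacks and learns to extract image features."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to a toy frame with a vertical boundary.
frame = np.zeros((5, 5))
frame[:, 3:] = 1.0
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
fmap = conv2d(frame, sobel_x)  # responds only where the boundary lies
```

In VGG the kernel weights are not fixed like this Sobel filter but learned by back-propagation, which is what replaces manual feature design.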


2020 ◽  
pp. 1202-1214
Author(s):  
Riyadh Sahib Abdul Ameer ◽  
Mohammed Al-Taei

Human action recognition has gained popularity because of its wide applicability, such as in patient monitoring systems, surveillance systems, and the wide diversity of systems involving interactions between people and electrical devices, including human-computer interfaces. The proposed method includes sequential stages of object segmentation, feature extraction, action detection, and action recognition. Recognizing human actions effectively using different features of unconstrained videos is a challenging task due to camera motion, cluttered backgrounds, occlusions, the complexity of human movements, and the variety with which the same actions are performed by distinct subjects. The proposed method overcomes such problems by fusing features to build a powerful human action descriptor. This descriptor is quantized into a visual word vocabulary (or codebook), which yields a Bag-of-Words representation. The True Positive Rate (TPR) and False Positive Rate (FPR) measures give a reliable indication of the proposed HAR system's behavior, and the computed Accuracy (Ar) and Error (misclassification) Rate (Er) reveal the effectiveness of the system on the dataset used.
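The evaluation measures named above (TPR, FPR, Ar, Er) all derive from the four confusion-matrix counts. A small sketch of the standard definitions, with hypothetical counts for one action class:

```python
def detection_metrics(tp, fp, tn, fn):
    """True Positive Rate, False Positive Rate, Accuracy, and Error
    (misclassification) Rate from raw confusion-matrix counts."""
    tpr = tp / (tp + fn)                   # sensitivity / recall
    fpr = fp / (fp + tn)                   # fall-out
    ar = (tp + tn) / (tp + fp + tn + fn)   # accuracy
    er = 1.0 - ar                          # error rate
    return tpr, fpr, ar, er

# Hypothetical counts for one action class out of 200 test clips.
tpr, fpr, ar, er = detection_metrics(tp=90, fp=5, tn=95, fn=10)
```

Sweeping the detection threshold and plotting TPR against FPR yields the ROC curve, the usual way these two rates are reported together.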


2021 ◽  
Author(s):  
Akila.K

Abstract Background: Human action recognition encompasses the automatic analysis of ongoing events from video and has varied applications in many fields. Recognizing and understanding human actions from videos remains difficult because of the large variations in human appearance, posture, and body size within the same category. Objective: This paper focuses on a specific issue related to inter-class variation in human action recognition. Approach: To discriminate human actions within a category, a novel approach based on wavelet packet transformation is used for feature extraction. Since the focus is on classifying similar actions, non-linearity among the features is analyzed and discriminated by Deterministic Normalized Linear Discriminant Analysis (DN-LDA). The major part of the recognition system relies on the classification stage, where the dynamic feeds are finally classified by a Hidden Markov Model based on a rule set. Conclusion: Experimental results show that the proposed approach is discriminative for similar human actions and well adapted to inter-class variation.
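A wavelet packet transform repeatedly splits a signal into low-pass (approximation) and high-pass (detail) bands, recursing into both halves. One level of that split with the Haar wavelet can be sketched as follows; this is a generic illustration of the transform, not the paper's DN-LDA pipeline, and the 1-D "pose trajectory" is a hypothetical toy signal:

```python
import numpy as np

def haar_packet_level(x):
    """One level of a Haar wavelet packet decomposition: split an
    even-length signal into approximation (low-pass) and detail
    (high-pass) coefficients. A full wavelet packet transform applies
    this recursively to *both* output bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return a, d

# Toy 1-D pose trajectory (even length).
signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, detail = haar_packet_level(signal)
```

Because the Haar transform is orthonormal, the two bands together conserve the signal's energy, which makes band energies usable directly as classification features.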


Inventions ◽  
2020 ◽  
Vol 5 (3) ◽  
pp. 49
Author(s):  
Nusrat Tasnim ◽  
Md. Mahbubul Islam ◽  
Joong-Hwan Baek

Human action recognition has become one of the most attractive and demanding fields of research in computer vision and pattern recognition for facilitating easy, smart, and comfortable human-machine interaction. With the massive research improvements witnessed in recent years, several methods have been suggested for discriminating different types of human actions using color, depth, inertial, and skeleton information. Despite the many action identification methods using different modalities, classifying human actions using skeleton joint information in 3-dimensional space remains a challenging problem. In this paper, we propose an effective method for action recognition using 3D skeleton data. First, large-scale 3D skeleton joint information was analyzed and meaningful pre-processing was performed. Then, a simple, straightforward deep convolutional neural network (DCNN) was designed to classify the desired actions in order to evaluate the effectiveness of the proposed system. We also evaluated prior DCNN models such as ResNet18 and MobileNetV2, which outperform existing systems using human skeleton joint information.
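A common pre-processing step for feeding skeleton sequences to a 2-D DCNN is to map the (frames x joints x coordinates) tensor into an image-like array by normalizing each coordinate channel. A minimal sketch of that idea (an assumption about the general technique, not the authors' specific pre-processing; the random clip is toy data):

```python
import numpy as np

def skeleton_to_image(frames, out_range=255.0):
    """Map a (T, J, 3) sequence of 3-D skeleton joints to a (T, J, 3)
    pseudo-image: each coordinate channel (x, y, z) is min-max
    normalized to [0, out_range] so a 2-D CNN can consume the whole
    sequence like an RGB image."""
    frames = np.asarray(frames, dtype=float)
    lo = frames.min(axis=(0, 1), keepdims=True)   # per-channel minimum
    hi = frames.max(axis=(0, 1), keepdims=True)   # per-channel maximum
    return (frames - lo) / (hi - lo + 1e-8) * out_range

# Toy clip: 4 frames, 5 joints, (x, y, z) per joint.
rng = np.random.default_rng(42)
clip = rng.normal(size=(4, 5, 3))
img = skeleton_to_image(clip)
```

Frames become image rows and joints become columns, so temporal patterns appear as vertical structure that ordinary 2-D convolutions can pick up.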


2020 ◽  
Vol 29 (12) ◽  
pp. 2050190
Author(s):  
Amel Ben Mahjoub ◽  
Mohamed Atri

Action recognition is a very active area of computer vision. In the last few years, there has been growing interest in deep learning networks such as Long Short-Term Memory (LSTM) architectures due to their efficiency in long-term time sequence processing. In light of these advances in deep neural networks, there is considerable interest in developing an accurate action recognition approach with low complexity. This paper introduces a method for learning depth activity videos based on LSTM and classification fusion. The first step consists of extracting compact depth video features: we start by calculating Depth Motion Maps (DMM) from each sequence, then encode and concatenate contour and texture DMM characteristics using the histogram-of-oriented-gradients and local-binary-patterns descriptors. The second step is depth video classification based on a naive Bayes fusion approach: three classifiers, the collaborative representation classifier, the kernel-based extreme learning machine, and the LSTM, are trained separately to obtain classification scores, and the score outputs of all classifiers are fused with the naive Bayesian method to get the final predicted label. Our proposed method achieves a significant improvement in recognition rate compared to previous work on the Kinect v2 and UTD-MHAD human action datasets.
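A Depth Motion Map compresses a whole depth clip into one 2-D image by accumulating the (optionally thresholded) absolute differences of consecutive frames. A minimal sketch of that accumulation step (a common DMM formulation, offered as an assumption rather than the paper's exact variant; the 2x2 "depth frames" are toy data):

```python
import numpy as np

def depth_motion_map(depth_frames, threshold=0.0):
    """Depth Motion Map: accumulate thresholded absolute differences of
    consecutive depth frames, compressing a clip into one 2-D map."""
    frames = np.asarray(depth_frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # |frame[t+1] - frame[t]|
    diffs[diffs <= threshold] = 0.0          # suppress sensor noise
    return diffs.sum(axis=0)

# Toy clip: three 2x2 depth frames with a "moving" foreground pixel.
clip = np.array([
    [[0.0, 0.0], [1.0, 0.0]],
    [[0.0, 1.0], [0.0, 0.0]],
    [[1.0, 1.0], [0.0, 0.0]],
])
dmm = depth_motion_map(clip)
```

The resulting map is a static image, so contour (HOG) and texture (LBP) descriptors can then be computed on it exactly as for any grayscale image.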


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 1993
Author(s):  
Malik Ali Gul ◽  
Muhammad Haroon Yousaf ◽  
Shah Nawaz ◽  
Zaka Ur Rehman ◽  
HyungWon Kim

Human action recognition has emerged as a challenging research domain for video understanding and analysis, and extensive research has been conducted to improve the performance of human action recognition. Human activity recognition has various real-time applications, such as patient monitoring, in which patients are monitored among a group of normal people and then identified based on their abnormal activities. Our goal is to render multi-class abnormal action detection in individuals as well as in groups from video sequences in order to differentiate multiple abnormal human actions. In this paper, the You Only Look Once (YOLO) network is utilized as the backbone CNN model. For training, we constructed a large dataset of patient videos by labeling each frame with a set of patient actions and positions, and retrained the backbone CNN model with 23,040 labeled images of patient actions for 32 epochs. The proposed model allocates a unique confidence score and action label to each frame and labels a video sequence by finding the recurrent action label. The present study shows that the accuracy of abnormal action recognition is 96.8%, and the proposed approach differentiates abnormal actions with an improved F1-score of 89.2%, higher than state-of-the-art techniques. The results indicate that the proposed framework can be beneficial to hospitals and elder-care homes for patient monitoring.
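Labeling a clip by its "recurrent action label" amounts to taking the most frequent per-frame prediction. A minimal sketch of that aggregation step (the per-frame label strings are hypothetical detector outputs, not the paper's classes):

```python
from collections import Counter

def video_action_label(frame_predictions):
    """Assign one action label to a clip by taking the most frequent
    ('recurrent') per-frame label; ties resolve to the label seen first."""
    return Counter(frame_predictions).most_common(1)[0][0]

# Hypothetical per-frame outputs from a YOLO-style detector.
frames = ["walking", "falling", "falling", "walking", "falling"]
clip_label = video_action_label(frames)
```

Weighting each frame's vote by its detection confidence score is a natural refinement when per-frame confidences are available.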


2013 ◽  
Vol 859 ◽  
pp. 498-502 ◽  
Author(s):  
Zhi Qiang Wei ◽  
Ji An Wu ◽  
Xi Wang

In order to identify human daily actions, this paper realizes a method that converts human action recognition into feature-sequence analysis. The feature sequence, combined with an improved longest common subsequence (LCS) algorithm, then realizes human action recognition. Data analysis and experimental results show that the recognition rate of this method is high and its speed is fast, so the applied technology has broad prospects.
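Matching a feature sequence against action templates with LCS rests on the classic dynamic programme for longest-common-subsequence length. A sketch of the standard (unimproved) algorithm, with quantized pose symbols as a hypothetical toy example:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences
    (classic O(len(a) * len(b)) dynamic programme)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_similarity(a, b):
    """Normalized similarity in [0, 1]; the nearest template wins."""
    return lcs_length(a, b) / max(len(a), len(b))

# Toy quantized pose-feature sequences: observed clip vs. a template.
observed = "AABCCD"
score = lcs_similarity(observed, "ABCD")
```

Because LCS tolerates insertions and deletions, it naturally absorbs variations in action speed, repeated frames match into the same subsequence without penalty beyond the normalization.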

