The human action recognition system and its relationship to Broca’s area: an fMRI study

NeuroImage ◽  
2003 ◽  
Vol 19 (3) ◽  
pp. 637-644 ◽  
Author(s):  
Farsin Hamzei ◽  
Michel Rijntjes ◽  
Christian Dettmers ◽  
Volkmar Glauche ◽  
Cornelius Weiller ◽  
...

2015 ◽  
Vol 42 (1) ◽  
pp. 138-143
Author(s):  
ByoungChul Ko ◽  
Mincheol Hwang ◽  
Jae-Yeal Nam

Author(s):  
MARC BOSCH-JORGE ◽  
ANTONIO-JOSÉ SÁNCHEZ-SALMERÓN ◽  
CARLOS RICOLFE-VIALA

The aim of this work is to present a vision-based human action recognition system adapted to constrained embedded devices such as smartphones. Vision-based human action recognition is essentially a combination of feature tracking, descriptor extraction and subsequent classification of image representations, combined here with a color-based identification tool to distinguish between multiple human subjects. Simple descriptor sets were evaluated to optimize recognition rate and performance, and two-dimensional (2D) descriptors were found to be effective. Installed on recent phones, these sets can recognize human actions in videos in less than one second with a success rate above 82%.
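The descriptor-then-classify pipeline described above can be sketched with a minimal nearest-centroid classifier over fixed-length 2D descriptor vectors. The descriptor values and class names below are illustrative assumptions, not the descriptors actually used in the paper:

```python
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (label, descriptor_vector). Returns per-class mean vectors."""
    sums, counts = {}, defaultdict(int)
    for label, vec in samples:
        if label not in sums:
            sums[label] = list(vec)
        else:
            sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def classify(centroids, vec):
    """Assign the action class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))

# Toy 2D descriptors (e.g. averaged motion magnitude in upper vs lower body).
train = [("wave", [0.9, 0.1]), ("wave", [0.8, 0.2]),
         ("walk", [0.4, 0.6]), ("walk", [0.5, 0.5])]
model = train_centroids(train)
print(classify(model, [0.85, 0.15]))  # a wave-like descriptor -> "wave"
```

A nearest-centroid rule is chosen here only because it is cheap enough for the embedded setting the paper targets; the actual classifier is not specified in the abstract.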


2012 ◽  
Vol 22 (06) ◽  
pp. 1250028 ◽  
Author(s):  
K. SUBRAMANIAN ◽  
S. SURESH

We propose a sequential meta-cognitive learning algorithm for a Neuro-Fuzzy Inference System (McFIS) to efficiently recognize human actions from video sequences. Optical flow information between two consecutive image planes can represent actions hierarchically from the local pixel level to the global object level, and is therefore used to describe the human action in the McFIS classifier. The McFIS classifier and its sequential learning algorithm are developed based on the principles of self-regulation observed in human meta-cognition. McFIS decides what to learn, when to learn and how to learn based on the knowledge stored in the classifier and the information contained in the new training samples. The sequential learning algorithm of McFIS is controlled and monitored by the meta-cognitive components, which use class-specific, knowledge-based criteria along with self-regulatory thresholds to decide on one of the following strategies: (i) sample deletion, (ii) sample learning and (iii) sample reserve. The performance of the proposed McFIS-based human action recognition system is evaluated using the benchmark Weizmann and KTH video sequences. The simulation results are compared with the well-known SVM classifier and with state-of-the-art action recognition results reported in the literature. The results clearly indicate that the McFIS action recognition system achieves better performance with minimal computational effort.
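The three-way self-regulation decision (delete / learn / reserve) can be illustrated with a toy rule over classifier confidence. The threshold values and the confidence measure below are assumptions for illustration only, not the class-specific criteria actually used in McFIS:

```python
def mcfis_strategy(predicted_ok, confidence, delete_threshold=0.9, learn_threshold=0.5):
    """Toy self-regulation rule in the spirit of McFIS:
    - a sample the classifier already handles confidently adds nothing -> delete
    - a poorly handled sample carries new knowledge -> learn immediately
    - anything in between is set aside for a later epoch -> reserve
    """
    if predicted_ok and confidence >= delete_threshold:
        return "delete"
    if not predicted_ok or confidence < learn_threshold:
        return "learn"
    return "reserve"

print(mcfis_strategy(True, 0.95))   # delete: nothing new to learn
print(mcfis_strategy(False, 0.40))  # learn: misclassified, novel information
print(mcfis_strategy(True, 0.70))   # reserve: correct but not yet confident
```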


2017 ◽  
Vol 7 (1.1) ◽  
pp. 489 ◽  
Author(s):  
P V.V. Kishore ◽  
P Siva Kameswari ◽  
K Niharika ◽  
M Tanuja ◽  
M Bindu ◽  
...  

Human action recognition is a vibrant area of research with multiple applications in human-machine interfaces. In this work, we propose a human action recognition method based on spatial graph kernels on 3D skeletal data. Spatial joint features are extracted using joint distances between human joint distributions in 3D space. A spatial graph is constructed using the 3D points as vertices and the computed joint distances as edges for each action frame in the video sequence. Spatial graph kernels between the training set and the testing set are constructed to extract the similarity between the two action sets. Two spatial graph kernels are constructed, with vertex and edge data represented by joint positions and joint distances, respectively. To test the proposed method, we use four publicly available 3D skeletal datasets: G3D, MSR Action 3D, UT Kinect and NTU RGB+D. The proposed spatial graph kernels result in better classification accuracies than state-of-the-art models.
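The per-frame graph construction can be sketched as follows: pairwise joint distances form the edge data, and a kernel compares the edge vectors of two frames. The RBF form and the gamma value are illustrative assumptions, since the abstract only states that kernels are built from joint positions and joint distances:

```python
import math

def joint_distances(skeleton):
    """Pairwise Euclidean distances between all 3D joints of one frame (the edge data)."""
    d = []
    for i in range(len(skeleton)):
        for j in range(i + 1, len(skeleton)):
            d.append(math.dist(skeleton[i], skeleton[j]))
    return d

def edge_kernel(frame_a, frame_b, gamma=1.0):
    """RBF kernel on the joint-distance (edge) vectors of two skeleton frames."""
    da, db = joint_distances(frame_a), joint_distances(frame_b)
    sq = sum((x - y) ** 2 for x, y in zip(da, db))
    return math.exp(-gamma * sq)

# Two toy 3-joint skeletons; identical frames give the maximum kernel value 1.0.
a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(0, 0, 0), (1, 0, 0), (0, 1, 1)]
print(edge_kernel(a, a))  # 1.0
print(edge_kernel(a, b))  # < 1.0: skeletons differ
```

A distance-based edge kernel has the convenient property of being invariant to rigid translation and rotation of the skeleton, which may be why joint distances rather than raw positions are used for the edge data.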


2021 ◽  
Author(s):  
Akila.K

Abstract
Background: Human action recognition encompasses the automatic analysis of ongoing events from video and has applications in many different fields. Recognizing and understanding human actions from videos remains a difficult problem because of the large variations in human appearance, posture and body size within the same category.
Objective: This paper focuses on a specific issue related to inter-class variation in human action recognition.
Approach: To discriminate human actions among categories, a novel approach based on wavelet packet transformation is used for feature extraction. Since the focus is on classifying similar actions, non-linearity among the features is analyzed and discriminated by Deterministic Normalized Linear Discriminant Analysis (DN-LDA). The major part of the recognition system relies on the classification stage, where the dynamic feeds are classified by a Hidden Markov Model based on a rule set.
Conclusion: Experimental results show that the proposed approach is discriminative for similar human action recognition and well adapted to inter-class variation.
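The wavelet packet transformation named in the approach differs from the plain wavelet transform in that both the approximation and the detail band are split at every level, giving a full tree of sub-bands. A minimal sketch using the Haar basis (the paper does not specify which wavelet is used, so Haar is an assumption here):

```python
def haar_step(signal):
    """One Haar analysis step: (approximation, detail) half-length bands."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def wavelet_packet(signal, depth):
    """Full wavelet packet tree: unlike the plain wavelet transform,
    both the approximation AND the detail band are split at every level."""
    nodes = [signal]
    for _ in range(depth):
        nodes = [band for node in nodes for band in haar_step(node)]
    return nodes  # 2**depth leaf bands

leaves = wavelet_packet([4, 2, 6, 8, 9, 7, 1, 3], 2)
print(len(leaves))  # 4 leaf bands
# One common choice of feature per band is its absolute-sum energy:
features = [sum(abs(x) for x in band) for band in leaves]
print(features)
```

Band energies like these would then be the inputs discriminated by DN-LDA; the energy feature itself is an illustrative choice, not stated in the abstract.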


2020 ◽  
Vol 29 (12) ◽  
pp. 2050190
Author(s):  
Amel Ben Mahjoub ◽  
Mohamed Atri

Action recognition is a very active area of computer vision. In the last few years there has been growing interest in deep learning networks such as Long Short-Term Memory (LSTM) architectures, due to their efficiency in long-term time-sequence processing. In light of these advances in deep neural networks, there is now considerable interest in developing an accurate action recognition approach with low complexity. This paper introduces a method for learning depth activity videos based on an LSTM and classification fusion. The first step consists in extracting compact depth video features. We start with the calculation of Depth Motion Maps (DMMs) from each sequence. Then we encode and concatenate contour and texture DMM characteristics using the histogram-of-oriented-gradients and local-binary-patterns descriptors. The second step is depth video classification based on a naive Bayes fusion approach. Three classifiers, namely the collaborative representation classifier, the kernel-based extreme learning machine and the LSTM, are trained separately to obtain classification scores. Finally, we fuse the classification scores of all classifiers with the naive Bayes method to obtain the final predicted label. Our proposed method achieves a significant improvement in recognition rate compared with previous work that used the Kinect v2 and UTD-MHAD human action datasets.
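The final fusion step can be sketched as follows: treating the three classifiers as conditionally independent, their normalized per-class scores are multiplied (summed in log space for numerical stability) and the argmax class is taken. The score values below are hypothetical, and the paper's exact likelihood estimation may differ:

```python
import math

def normalize(scores):
    """Turn raw per-class scores into a probability-like distribution."""
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def naive_bayes_fusion(classifier_scores):
    """Naive Bayes fusion: multiply the classifiers' normalized per-class
    scores under an independence assumption, then pick the best class."""
    classes = classifier_scores[0].keys()
    fused = {}
    for c in classes:
        # log-sum instead of product to avoid underflow with many classifiers
        fused[c] = sum(math.log(normalize(s)[c] + 1e-12) for s in classifier_scores)
    return max(fused, key=fused.get)

# Hypothetical per-class scores from the three classifiers (CRC, KELM, LSTM).
crc  = {"wave": 0.6, "walk": 0.3, "sit": 0.1}
kelm = {"wave": 0.4, "walk": 0.5, "sit": 0.1}
lstm = {"wave": 0.7, "walk": 0.2, "sit": 0.1}
print(naive_bayes_fusion([crc, kelm, lstm]))  # "wave"
```

Note that the fused decision can override an individual classifier: KELM alone prefers "walk", but the product of all three scores favors "wave".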

