Human Activity Recognition Algorithm in Video Sequences Based on Integration of Magnitude and Orientation Information of Optical Flow

Author(s):  
Arati Kushwaha ◽  
Ashish Khare ◽  
Manish Khare

Human activity recognition from video sequences has recently emerged as a pivotal research area due to its importance in a large number of applications, such as real-time surveillance monitoring, healthcare, smart homes, security, and behavior analysis. However, many challenges remain, such as intra-class variation, object occlusion, varying illumination conditions, complex backgrounds, and camera motion. In this work, we introduce a novel feature descriptor based on the integration of the magnitude and orientation information of optical flow with histograms of oriented gradients, which yields an efficient and robust feature vector for recognizing human activities in real-world environments. In the proposed approach, we first compute the magnitude and orientation of the optical flow separately; local oriented histograms of the magnitude and orientation of the motion flow vectors are then computed using histograms of oriented gradients, followed by a linear-combination feature fusion strategy. The resulting features are processed by a multiclass Support Vector Machine (SVM) classifier for activity recognition. Experiments are performed on different publicly available benchmark video datasets: the UT-Interaction, CASIA, and HMDB51 datasets. The effectiveness of the proposed approach is evaluated in terms of six performance parameters: accuracy, precision, recall, specificity, F-measure, and Matthews correlation coefficient (MCC). To show the significance of the proposed method, it is compared with other state-of-the-art methods, and the experimental results show that it performs well in comparison.
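The core of the descriptor above can be sketched as follows: histogram the magnitude and orientation of a dense optical-flow field and fuse the two histograms linearly. This is a minimal illustration, not the authors' implementation; the bin count, normalization, and fusion weight `alpha` are assumptions, and the HOG-style spatial cell layout is omitted.

```python
import numpy as np

def flow_histograms(flow, n_bins=9):
    """Histogram the magnitude and orientation of a dense optical-flow
    field of shape (H, W, 2). Bin layout is an illustrative assumption."""
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.hypot(fx, fy)
    ori = np.mod(np.arctan2(fy, fx), 2 * np.pi)  # orientation in [0, 2*pi)
    # Orientation histogram with magnitude-weighted voting (HOG-style)
    h_ori, _ = np.histogram(ori, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    # Magnitude histogram over the observed magnitude range
    h_mag, _ = np.histogram(mag, bins=n_bins, range=(0, mag.max() + 1e-9))
    return h_mag, h_ori

def fuse(h_mag, h_ori, alpha=0.5):
    """Linear-combination fusion of the two L2-normalized histograms."""
    h_mag = h_mag / (np.linalg.norm(h_mag) + 1e-9)
    h_ori = h_ori / (np.linalg.norm(h_ori) + 1e-9)
    return alpha * h_mag + (1 - alpha) * h_ori

rng = np.random.default_rng(0)
flow = rng.standard_normal((64, 64, 2))  # stand-in for a computed flow field
feat = fuse(*flow_histograms(flow))
print(feat.shape)  # (9,) -- one fused descriptor per region
```

In practice the flow field would come from a dense optical-flow estimator, and the fused per-cell descriptors would be concatenated and passed to the multiclass SVM.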




Author(s):  
Swati Nigam ◽  
Rajiv Singh ◽  
A. K. Misra

Computer vision techniques are capable of detecting human behavior from video sequences. Several state-of-the-art techniques have been proposed for human behavior detection and analysis. However, a collective framework is always required for intelligent human behavior analysis. Therefore, in this chapter, the authors provide a comprehensive understanding of human behavior detection approaches. The framework of this chapter is based on human detection, human tracking, and human activity recognition, as these are the basic steps of the human behavior detection process. The authors provide a detailed discussion of the human behavior detection framework and discuss feature-descriptor-based approaches. Furthermore, they provide qualitative and quantitative analysis for the detection framework and demonstrate results for human detection, human tracking, and human activity recognition.



Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1888
Author(s):  
Malek Boujebli ◽  
Hassen Drira ◽  
Makram Mestiri ◽  
Imed Riadh Farah

Human activity recognition is one of the most challenging and active areas of research in the computer vision domain. However, designing automatic systems that are robust to the significant variability caused by object combinations and the high complexity of human motion is even more challenging. In this paper, we propose to model the inter-frame rigid evolution of skeleton parts as a trajectory in the Lie group SE(3)×…×SE(3). The motion of the object is similarly modeled as an additional trajectory in the same manifold. Classification is performed through a rate-invariant comparison of the resulting trajectories mapped to a vector space, the Lie algebra. Experimental results on three action and activity datasets show that the proposed method outperforms various state-of-the-art human activity recognition approaches.
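The mapping from the Lie group to the Lie algebra mentioned above can be sketched for a single SE(3) factor: each rigid transform (R, t) is flattened to a 6-vector using the rotation logarithm map. This is a simplified illustration, not the paper's method; the translation is kept as-is rather than rectified by the full se(3) log, and the rate-invariant trajectory comparison is not reproduced.

```python
import numpy as np

def so3_log(R):
    """Logarithm map of a 3x3 rotation matrix to its axis-angle vector."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-8:
        return np.zeros(3)  # identity rotation maps to the zero element
    w_hat = (R - R.T) * theta / (2 * np.sin(theta))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def trajectory_to_vector(frames):
    """Flatten a trajectory of per-part rigid transforms (R, t) into one
    vector: a 6-vector [log(R), t] per transform, concatenated over frames."""
    return np.concatenate([np.concatenate([so3_log(R), t]) for R, t in frames])

# A 90-degree rotation about z maps to the axis-angle vector [0, 0, pi/2]
R90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v = trajectory_to_vector([(R90, np.zeros(3))])
print(v)  # [0. 0. 1.5708 0. 0. 0.] (approximately)
```

Once trajectories live in this flat vector space, standard classifiers and time-warping distances can be applied to compare them.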



2020 ◽  
Vol 100 ◽  
pp. 107140 ◽  
Author(s):  
Hazar Mliki ◽  
Fatma Bouhlel ◽  
Mohamed Hammami


2019 ◽  
Vol 5 (1) ◽  
pp. 1-9
Author(s):  
Mohammad Iqbal ◽  
Chandrawati Putri Wulandari ◽  
Wawan Yunanto ◽  
Ghaluh Indah Permata Sari

Rare human activity patterns discovered from triggered motion sensors deliver distinctive information that can notify people about hazardous situations. This study aims to recognize rare human activities using a non-zero-rare sequential pattern mining technique. In particular, it mines triggered motion sensor sequences to obtain non-zero-rare human activity patterns: patterns that do occur in the motion sensor sequences but whose occurrence counts fall below a pre-defined occurrence threshold. The study proposes an algorithm for mining non-zero-rare patterns in human activity recognition, called Mining Multi-class Non-Zero-Rare Sequential Patterns (MMRSP). The experimental results showed that non-zero-rare human activity patterns succeed in capturing unusual activity, and that MMRSP performs well in terms of precision on rare activities.
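The "non-zero-rare" support condition above can be illustrated with a toy miner: enumerate sensor subsequences and keep those that occur at least once but no more than a support threshold. This is a sketch of the condition only, not MMRSP itself, which additionally handles multi-class labels and general (non-contiguous) sequential patterns; the contiguous-subsequence restriction here is a simplifying assumption.

```python
from collections import Counter

def non_zero_rare_patterns(sequences, length, max_support):
    """Return contiguous sensor subsequences of the given length whose
    total occurrence count c satisfies 0 < c <= max_support."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - length + 1):
            counts[tuple(seq[i:i + length])] += 1
    return {p: c for p, c in counts.items() if 0 < c <= max_support}

# Three toy motion-sensor firing sequences (sensor IDs are hypothetical)
seqs = [["M1", "M2", "M3"], ["M1", "M2", "M4"], ["M1", "M2", "M3"]]
rare = non_zero_rare_patterns(seqs, length=2, max_support=1)
print(rare)  # {('M2', 'M4'): 1} -- the one pattern that is rare but non-zero
```

Frequent transitions like ("M1", "M2") are filtered out by the support ceiling, leaving only the unusual sensor sequence as a candidate hazard indicator.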



Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8337
Author(s):  
Hyeokhyen Kwon ◽  
Gregory D. Abowd ◽  
Thomas Plötz

Supervised training of human activity recognition (HAR) systems based on body-worn inertial measurement units (IMUs) is often constrained by the typically rather small amounts of labeled sample data. Systems like IMUTube have been introduced that employ cross-modality transfer approaches to convert videos of activities of interest into virtual IMU data. We demonstrate for the first time how such large-scale virtual IMU datasets can be used to train HAR systems that are substantially more complex than the state of the art. Complexity is represented here by the number of model parameters that can be trained robustly. Our models contain components dedicated to capturing the essentials of IMU data as they are relevant for activity recognition, which increases the number of trainable parameters by a factor of 1100 compared to state-of-the-art model architectures. We evaluate the new model architecture on the challenging task of analyzing free-weight gym exercises, specifically classifying 13 dumbbell exercises. We collected around 41 h of virtual IMU data using IMUTube from exercise videos available on YouTube. The proposed model is trained with this large amount of virtual IMU data and calibrated with a mere 36 min of real IMU data. The trained model was evaluated on a real IMU dataset, and we demonstrate substantial performance improvements of 20% absolute F1 score compared to state-of-the-art convolutional HAR models.
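The pretrain-then-calibrate workflow described above can be sketched with a deliberately tiny stand-in model: fit a classifier on abundant "virtual" data, then continue training from those weights on a small "real" calibration set. Everything here is hypothetical; the synthetic two-class data, the minimal softmax model, and the epoch/learning-rate settings are illustrative assumptions and bear no relation to the paper's architecture or datasets.

```python
import numpy as np

def train_linear(X, y, W=None, epochs=200, lr=0.1):
    """Minimal softmax classifier trained with batch gradient descent;
    passing W resumes training from existing weights (calibration)."""
    n_cls = int(y.max()) + 1
    if W is None:
        W = np.zeros((X.shape[1], n_cls))
    Y = np.eye(n_cls)[y]  # one-hot targets
    for _ in range(epochs):
        Z = X @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (P - Y) / len(X)  # cross-entropy gradient step
    return W

rng = np.random.default_rng(0)
# Abundant "virtual IMU" features and a small "real IMU" calibration set,
# drawn from the same two synthetic activity classes.
centers = np.array([[1.0, -1.0], [-1.0, 1.0]])
y_virtual = rng.integers(0, 2, 2000)
X_virtual = centers[y_virtual] + rng.standard_normal((2000, 2))
y_real = rng.integers(0, 2, 40)
X_real = centers[y_real] + 0.5 * rng.standard_normal((40, 2))

W = train_linear(X_virtual, y_virtual)            # pretrain on virtual data
W = train_linear(X_real, y_real, W=W, epochs=50)  # calibrate on real data
acc = ((X_real @ W).argmax(axis=1) == y_real).mean()
print(round(acc, 2))
```

The point of the two-stage loop is that the large virtual corpus does the bulk of the parameter fitting, so only minutes of real sensor data are needed to adapt the model.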


