Human Action Recognition: A Dense Trajectory and Similarity Constrained Latent Support Vector Machine Approach

Author(s):  
Sio-Long Lo ◽  
Ah-Chung Tsoi

2019 ◽  
Vol 9 (10) ◽  
pp. 2126
Author(s):  
Suge Dong ◽  
Daidi Hu ◽  
Ruijun Li ◽  
Mingtao Ge

To address the high trajectory redundancy and susceptibility to background interference of traditional dense-trajectory action recognition methods, a human action recognition method based on foreground trajectories and motion difference descriptors is proposed. First, the motion magnitude of each frame is estimated from optical flow, and the foreground region is determined from the motion magnitude of each pixel; trajectories are extracted only from behavior-related foreground regions. Second, to better describe the relative temporal information between different actions, a motion difference descriptor is introduced for the foreground trajectories: a direction histogram of the motion difference is constructed by computing the direction of the motion difference per unit time at each trajectory point. Finally, a Fisher vector (FV) is used to encode the histogram features into video-level action features, and a support vector machine (SVM) is used to classify the action category. Experimental results show that this method extracts action-related trajectories more effectively and improves recognition accuracy by 7% over the traditional dense trajectory method.
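A minimal sketch of the first two stages described above, assuming OpenCV's Farneback algorithm as the optical-flow estimator (the abstract does not name one); the mean-magnitude threshold ratio and the number of orientation bins are illustrative assumptions, and the FV encoding and SVM stages are omitted.

```python
import cv2
import numpy as np

def foreground_mask(prev_gray, gray, ratio=0.5):
    """Estimate the behavior-related foreground of a frame from
    optical-flow magnitude: pixels moving faster than `ratio` times
    the frame's mean motion magnitude are kept (assumed threshold)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return magnitude > ratio * magnitude.mean()

def motion_difference_histogram(track, bins=8):
    """Direction histogram of the motion difference along one foreground
    trajectory. `track` is an (L, 2) array of tracked point positions;
    the motion difference per unit time is the frame-to-frame change of
    the displacement vector (a second-order difference)."""
    displacement = np.diff(track, axis=0)        # per-frame displacement
    difference = np.diff(displacement, axis=0)   # motion difference per unit time
    angles = np.arctan2(difference[:, 1], difference[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)             # normalized direction histogram
```

Histograms computed this way for all foreground trajectories of a video would then be FV-encoded into one video-level feature before SVM classification.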


2014 ◽  
Vol 889-890 ◽  
pp. 1057-1064
Author(s):  
Rui Feng Li ◽  
Liang Liang Wang ◽  
Teng Fei Zhang

Because of the great complexity and variability of actions and scenarios, human action recognition often requires a large, computation-intensive representation to achieve accurate recognition with good diversity. In this paper, an efficient combined action representation approach is proposed to resolve this dilemma between accuracy and diversity. Two action features are extracted from a Kinect sensor and combined: the silhouette and the 3D skeleton information. An improved Histogram of Oriented Gradients, named Interest-HOG, is proposed for silhouette representation, while the angles between skeleton points are calculated as the 3D representation. Kernel Principal Component Analysis (KPCA) is applied bidirectionally to the Interest-HOG descriptor to obtain a concise, normalized vector of the same dimensionality as the 3D one, so that the two can be combined successfully. A depth dataset named DS&SP, comprising 10 kinds of actions performed by 12 persons in 4 scenarios, is built as the benchmark for our approach, on which a support vector machine (SVM) is employed for training and testing. Experimental results show that our approach achieves good accuracy, efficiency, and robustness to self-occlusion.
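As an illustration of the combination step, the following sketch reduces a silhouette descriptor to the dimensionality of the skeleton-angle feature with scikit-learn's KernelPCA before concatenation; the paper applies KPCA bidirectionally, so plain KPCA, the RBF kernel, the feature dimensions, and the synthetic data here are all simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
hog_feats = rng.normal(size=(120, 324))    # stand-in Interest-HOG silhouette descriptors
angle_feats = rng.normal(size=(120, 20))   # stand-in skeleton joint-angle features
labels = rng.integers(0, 10, size=120)     # 10 action classes, as in DS&SP

# Shrink the HOG descriptor to the same dimensionality as the 3D feature
# so the two modalities carry equal weight in the concatenated vector.
kpca = KernelPCA(n_components=angle_feats.shape[1], kernel="rbf")
hog_reduced = kpca.fit_transform(hog_feats)
combined = np.hstack([hog_reduced, angle_feats])

# Train and sanity-check an SVM on the combined representation.
clf = SVC(kernel="rbf").fit(combined, labels)
print(clf.score(combined, labels))
```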


2022 ◽  
Vol 2022 ◽  
pp. 1-18
Author(s):  
Chao Tang ◽  
Anyang Tong ◽  
Aihua Zheng ◽  
Hua Peng ◽  
Wei Li

Traditional human action recognition (HAR) methods are based on RGB video. Recently, with the introduction of the Microsoft Kinect and other consumer-grade depth cameras, HAR based on RGB-D (RGB-Depth) data has drawn increasing attention from scholars and industry. Compared with traditional methods, RGB-D-based HAR offers higher accuracy and stronger robustness. In this paper, a selective ensemble support vector machine that fuses multimodal features for human action recognition is proposed. The algorithm combines improved HOG features from the RGB modality, depth motion map-based local binary pattern (DMM-LBP) features, and hybrid joint features (HJF) from the joint modality. In addition, a frame-based selective ensemble support vector machine classification model (SESVM) is proposed, which effectively integrates the selective ensemble strategy with the selection of SVM base classifiers, thereby increasing the differences between the base classifiers. Experimental results demonstrate that, compared with other action recognition algorithms, the proposed method is simple, fast, and efficient on public datasets.
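A compact sketch of the fusion idea, assuming one RBF-SVM base classifier per feature modality (improved HOG, DMM-LBP, HJF) and accuracy-based selection on a validation split; the paper's frame-based construction and its exact selection criterion are more elaborate, so this is a simplified stand-in, not the SESVM itself.

```python
import numpy as np
from sklearn.svm import SVC

def train_base_svms(train_sets, y):
    """Train one RBF-SVM per feature modality, e.g. improved-HOG,
    DMM-LBP, and HJF feature matrices over the same training clips."""
    return [SVC(kernel="rbf", probability=True).fit(X, y) for X in train_sets]

def select_bases(svms, val_sets, y_val, keep=2):
    """Selective-ensemble step: keep the `keep` base classifiers with the
    highest validation accuracy (a simplification of the paper's strategy,
    which also drives diversity between the base classifiers)."""
    accs = [svm.score(X, y_val) for svm, X in zip(svms, val_sets)]
    return list(np.argsort(accs)[::-1][:keep])

def fuse_predict(svms, chosen, test_sets):
    """Fuse the selected base classifiers by averaging their class
    probabilities and taking the most probable action label."""
    probs = np.mean([svms[i].predict_proba(test_sets[i]) for i in chosen], axis=0)
    return probs.argmax(axis=1)
```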

