ActionFlowNet: Learning Motion Representation for Action Recognition

Author(s): Joe Yue-Hei Ng, Jonghyun Choi, Jan Neumann, Larry S. Davis

Author(s): Songrui Guo, Huawei Pan, Guanghua Tan, Lin Chen, Chunming Gao

Human action recognition is an important research topic in numerous fields, for example human–computer interaction, computer vision, and crime analysis. In recent years, relative geometry features have been widely applied to describe the relative motion of the body. They bring many benefits to action recognition, such as clear description and abundant features, but their obvious disadvantage is that the extracted features rely heavily on the local coordinate system, and it is difficult to find a bijection between relative geometry and skeleton motion. To overcome this problem, many previous methods use the relative rotation and translation between all skeleton pairs to increase robustness. In this paper we present a new motion representation method that establishes a motion model based on relative geometry with the aid of the special orthogonal group SO(3). We also prove that this representation establishes a bijection between relative geometry and the motion of skeleton pairs. With this representation, the computation cost of action recognition is reduced from two-way relative motion (motion from A to B and from B to A) to one-way relative motion (motion from A to B or from B to A) for any skeleton pair; that is, the permutation problem P(n,2) is simplified into the combination problem C(n,2). Finally, experimental results on three motion datasets show that the proposed method outperforms existing skeleton-based action recognition methods.
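The abstract gives no implementation details, but the claimed saving, from ordered pairs P(n,2) down to unordered pairs C(n,2), follows directly from the group structure of SO(3): the motion from B to A is the inverse (here, the transpose) of the motion from A to B. A minimal sketch of that idea, with hypothetical per-joint orientation matrices standing in for real skeleton data:

```python
# Minimal sketch (not the authors' code): relative rotations between joint
# frames as SO(3) elements. The joint orientations R[i] are hypothetical inputs.
import numpy as np
from itertools import combinations, permutations

def relative_rotation(R_a, R_b):
    """Rotation carrying the orientation of joint A onto joint B (an SO(3) element)."""
    return R_b @ R_a.T

n = 5
rng = np.random.default_rng(0)
# Hypothetical per-joint orientations: random rotations via QR decomposition.
R = []
for _ in range(n):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.linalg.det(Q))   # force det = +1 so Q lies in SO(3)
    R.append(Q)

# One-way relative motion: store only the C(n,2) unordered pairs, because the
# reverse rotation is just the inverse (transpose) of the forward one.
rel = {(a, b): relative_rotation(R[a], R[b]) for a, b in combinations(range(n), 2)}

# Check: every ordered pair in P(n,2) is recoverable from the stored half.
for a, b in permutations(range(n), 2):
    R_ab = rel[(a, b)] if (a, b) in rel else rel[(b, a)].T
    assert np.allclose(R_ab, relative_rotation(R[a], R[b]))
print(f"stored {len(rel)} rotations instead of {n * (n - 1)}")
```

For n joints this halves the number of relative motions that must be computed and stored, since n(n-1)/2 unordered pairs replace n(n-1) ordered ones.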


2020, Vol. 34 (07), pp. 12918–12925
Author(s): Yiyi Zhang, Li Niu, Ziqi Pan, Meichao Luo, Jianfu Zhang, ...

Static image action recognition, which aims to recognize an action from a single image, usually relies on expensive human labeling effort, such as adequately labeled action images and large-scale labeled image datasets. In contrast, abundant unlabeled videos can be obtained economically. Several works have therefore explored using unlabeled videos to facilitate image action recognition; they fall into two groups: (a) enhancing the visual representations of action images with a proxy task designed on unlabeled videos, which falls within the scope of self-supervised learning; and (b) generating auxiliary representations for action images with a generator learned from unlabeled videos. In this paper, we integrate the two strategies in a unified framework consisting of a Visual Representation Enhancement (VRE) module and a Motion Representation Augmentation (MRA) module. Specifically, the VRE module includes a proxy task that imposes a pseudo motion label constraint and a temporal coherence constraint on unlabeled videos, while the MRA module predicts the motion information of a static action image by exploiting unlabeled videos. We demonstrate the superiority of our framework on four benchmark human action datasets with limited labeled data.
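The framework is described only at the module level in the abstract; the following is a speculative PyTorch sketch of how the two modules might fit together. The encoder, layer sizes, loss weight, and the pseudo motion labels are all illustrative assumptions, not the paper's architecture:

```python
# Minimal sketch (not the paper's released code) of the two-module framework.
# All module names, dimensions, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Framework(nn.Module):
    def __init__(self, feat_dim=128, num_motion_classes=10, num_actions=5):
        super().__init__()
        # Shared image encoder (a tiny stand-in for a CNN backbone).
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        # VRE proxy head: predicts a pseudo motion label for a video frame.
        self.motion_head = nn.Linear(feat_dim, num_motion_classes)
        # MRA generator: predicts a motion representation from a static image.
        self.motion_gen = nn.Linear(feat_dim, feat_dim)
        # Action classifier over visual + augmented motion features.
        self.classifier = nn.Linear(2 * feat_dim, num_actions)

    def vre_loss(self, frame_t, frame_t1, pseudo_label):
        """Proxy task on unlabeled video: pseudo motion label constraint
        plus temporal coherence between neighbouring frames."""
        f_t, f_t1 = self.encoder(frame_t), self.encoder(frame_t1)
        cls_loss = F.cross_entropy(self.motion_head(f_t), pseudo_label)
        coherence = F.mse_loss(f_t, f_t1)  # nearby frames -> nearby features
        return cls_loss + 0.1 * coherence  # 0.1 is an assumed loss weight

    def forward(self, image):
        """Static image action recognition: MRA augments the visual
        representation with a predicted motion representation."""
        visual = self.encoder(image)
        motion = self.motion_gen(visual)
        return self.classifier(torch.cat([visual, motion], dim=1))

model = Framework()
# Unlabeled video step (VRE): two neighbouring frames + a pseudo motion label.
frames = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
loss = model.vre_loss(*frames, torch.randint(0, 10, (4,)))
# Labeled image step: standard supervised action classification.
logits = model(torch.randn(4, 3, 32, 32))
print(loss.item(), logits.shape)
```

In this rendering the two training signals share one encoder, so the VRE proxy losses on unlabeled video improve the visual features used at image-classification time, while the MRA generator supplies the motion half of the classifier input that a single still image otherwise lacks.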

