Long-Short Temporal Modeling for Efficient Action Recognition

Author(s):  
Liyu Wu ◽  
Yuexian Zou ◽  
Can Zhang
Data ◽  
2020 ◽  
Vol 5 (4) ◽  
pp. 104
Author(s):  
Ashok Sarabu ◽  
Ajit Kumar Santra

The two-stream convolutional neural network (CNN) has proven a great success in action recognition in videos. The main idea is to train two CNNs to learn spatial and temporal features separately, and to combine the two scores to obtain the final score. In the literature, we observed that most methods use similar CNNs for the two streams. In this paper, we design a two-stream CNN architecture that uses different CNNs for the two streams to learn spatial and temporal features. Temporal Segment Networks (TSN) are applied to retrieve long-range temporal features and to differentiate similar types of sub-actions in videos. Data augmentation techniques are employed to prevent over-fitting. Advanced cross-modal pre-training is discussed and introduced into the proposed architecture to enhance the accuracy of action recognition. The proposed two-stream model is evaluated on two challenging action recognition datasets: HMDB-51 and UCF-101. The proposed architecture shows a significant performance increase and outperforms existing methods.
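As an illustration of the segment-based sampling and late score fusion described above, the following Python sketch shows the general pattern; spatial_cnn and temporal_cnn are hypothetical placeholders for the two distinct stream networks, and the fusion weight is illustrative rather than the paper's setting.

# spatial_cnn and temporal_cnn stand in for the two distinct stream networks;
# rgb_frames is a list of per-segment RGB inputs, flow_stacks a list of
# per-segment stacked optical-flow inputs, each already shaped for its model.
import random

import torch

def sample_segments(num_frames, num_segments=3):
    # TSN-style sampling: one random frame index from each equal-length chunk.
    seg_len = num_frames // num_segments
    return [i * seg_len + random.randrange(seg_len) for i in range(num_segments)]

def two_stream_scores(rgb_frames, flow_stacks, spatial_cnn, temporal_cnn, w=0.5):
    # Average per-segment class scores within each stream (segmental consensus),
    # then fuse the two streams with a weighted sum.
    spatial = torch.stack([spatial_cnn(f) for f in rgb_frames]).mean(dim=0)
    temporal = torch.stack([temporal_cnn(s) for s in flow_stacks]).mean(dim=0)
    return w * spatial + (1.0 - w) * temporal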


Author(s):  
Dongliang He ◽  
Zhichao Zhou ◽  
Chuang Gan ◽  
Fu Li ◽  
Xiao Liu ◽  
...  

Despite the success of deep learning for static image understanding, it remains unclear which network architectures are most effective for spatial-temporal modeling in videos. In this paper, in contrast to the existing CNN+RNN or pure 3D convolution based approaches, we explore a novel spatial-temporal network (StNet) architecture for both local and global modeling in videos. Particularly, StNet stacks N successive video frames into a super-image which has 3N channels and applies 2D convolution on super-images to capture local spatial-temporal relationships. To model global spatial-temporal structure, we apply temporal convolution on the local spatial-temporal feature maps. Specifically, a novel temporal Xception block is proposed in StNet, which employs separate channel-wise and temporal-wise convolutions over the feature sequence of a video. Extensive experiments on the Kinetics dataset demonstrate that our framework outperforms several state-of-the-art approaches in action recognition and can strike a satisfying trade-off between recognition accuracy and model complexity. We further demonstrate the generalization performance of the learned video representations on the UCF101 dataset.
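The super-image construction and the temporal Xception idea can be sketched in Python as follows; the grouping of N frames and the layer sizes are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

def make_super_images(clip, n=5):
    # clip: (B, T, 3, H, W) with T divisible by n; groups of n consecutive
    # frames are stacked channel-wise into (B * T/n, 3n, H, W) "super-images"
    # that an ordinary 2D CNN can consume.
    b, t, c, h, w = clip.shape
    return clip.reshape(b * (t // n), n * c, h, w)

class TemporalXceptionBlock(nn.Module):
    # Separable temporal convolution over a (B, C, T) feature sequence:
    # a channel-wise (depthwise) temporal conv followed by a point-wise conv.
    def __init__(self, channels, kernel=3):
        super().__init__()
        self.depthwise = nn.Conv1d(channels, channels, kernel,
                                   padding=kernel // 2, groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))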


The objective is to develop a time-series image representation of skeletal action data and use it for recognition through a convolutional long short-term memory deep learning framework. Consequently, Kinect-captured human skeletal data are transformed into a Joint Change Distance Image (JCDI) descriptor, which maps the changes in the joints over time. Subsequently, JCDIs are decoded spatially with a convolutional neural network (CNN). Temporal decomposition is executed with a long short-term memory (LSTM) network on the data changes along the x, y, and z position vectors of the skeleton. We propose a combination of CNN and LSTM that maps the spatio-temporal information to generate generalized time-series features for recognition. Finally, scores are fused from the spatially vibrant CNNs and temporally sound LSTMs for action recognition. Publicly available action datasets such as NTU RGB+D, MSR Action, UTKinect, and G3D were used as test inputs for experimentation. The results showed better performance, due to spatio-temporal modeling at both the representation and recognition stages, when compared with other state-of-the-art methods.
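A minimal sketch of the described score fusion, assuming a hypothetical jcdi_cnn classifier for the JCDI descriptor and illustrative joint and class counts:

import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    # Scores the raw joint trajectories; input is (B, T, num_joints * 3),
    # i.e. the x, y, z coordinates of every joint at each time step.
    def __init__(self, num_joints=25, num_classes=60, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(num_joints * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, seq):
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])  # class scores from the last time step

def fuse_scores(jcdi_image, joint_seq, jcdi_cnn, skel_lstm, w=0.5):
    # Late fusion of the spatial (CNN) and temporal (LSTM) class scores.
    return w * jcdi_cnn(jcdi_image) + (1.0 - w) * skel_lstm(joint_seq)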


2013 ◽  
Vol 18 (2-3) ◽  
pp. 49-60 ◽  
Author(s):  
Damian Dudziński ◽
Tomasz Kryjak ◽  
Zbigniew Mikrut

Abstract: In this paper, a human action recognition algorithm is described that uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The performed tests indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
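The finite-state-machine stage can be illustrated with a toy Python sketch; the states and transitions below are invented for illustration and are not taken from the paper.

# Per-frame silhouette descriptions are quantized into coarse pose labels,
# and a transition table advances the machine; unknown transitions keep state.
TRANSITIONS = {
    ("standing", "bending"): "bending",
    ("bending", "lying"): "fallen",
    ("bending", "standing"): "standing",
    ("fallen", "standing"): "standing",
}

def step(state, observation):
    return TRANSITIONS.get((state, observation), state)

state = "standing"
for obs in ["bending", "lying", "lying"]:
    state = step(state, obs)
print(state)  # -> "fallen"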


2018 ◽  
Vol 6 (10) ◽  
pp. 323-328
Author(s):  
K. Kiruba ◽
D. Shiloah Elizabeth ◽
C. Sunil Retmin Raj

2019 ◽  
Author(s):  
Giacomo De Rossi ◽  
Nicola Piccinelli ◽  
Francesco Setti ◽  
Riccardo Muradore ◽  
...  
