Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition

2021, Vol 15 (2), pp. 1-23
Author(s): Bin Sun, Dehui Kong, Shaofan Wang, Lichun Wang, Baocai Yin

Multi-view human action recognition remains a challenging problem due to large view changes. In this article, we propose a transfer learning-based framework called the transferable dictionary learning and view adaptation (TDVA) model for multi-view human action recognition. In the transferable dictionary learning phase, TDVA learns a set of view-specific transferable dictionaries enabling the same actions from different views to share the same sparse representations, which can transfer features of actions from different views to an intermediate domain. In the view adaptation phase, TDVA comprehensively analyzes global, local, and individual characteristics of samples, and jointly learns balanced distribution adaptation, locality preservation, and discrimination preservation, aiming at transferring sparse features of actions of different views from the intermediate domain to a common domain. In other words, TDVA progressively bridges the distribution gap among actions from various views by these two phases. Experimental results on the IXMAS, ACT4², and NUCLA action datasets demonstrate that TDVA outperforms state-of-the-art methods.
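The core idea of the first phase — view-specific dictionaries constrained so that the same action observed from different views shares one sparse code — can be sketched with a toy alternating scheme. The following NumPy sketch is illustrative only: the function names, the soft-thresholding update, and all parameter values are my own choices, not the paper's TDVA solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm; keeps the shared codes sparse.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def joint_dictionary_learning(X1, X2, n_atoms=8, lam=0.1, n_iter=50, step=0.01):
    """Toy alternating scheme: two view-specific dictionaries D1, D2 are fit
    so that paired samples X1 (view 1) and X2 (view 2) reconstruct through
    ONE shared sparse code matrix A."""
    d1, n = X1.shape
    d2, _ = X2.shape
    D1 = rng.standard_normal((d1, n_atoms))
    D2 = rng.standard_normal((d2, n_atoms))
    A = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        # Gradient step on A over BOTH reconstruction terms, then shrink:
        # this is what couples the two views through a common code.
        grad = D1.T @ (D1 @ A - X1) + D2.T @ (D2 @ A - X2)
        A = soft_threshold(A - step * grad, step * lam)
        # Gradient step on each dictionary, columns renormalized.
        for D, X in ((D1, X1), (D2, X2)):
            D += step * (X - D @ A) @ A.T
            D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)
    return D1, D2, A
```

Because both reconstruction residuals feed the same code matrix `A`, samples of the same action from either view are pushed toward a common sparse representation — the "intermediate domain" role the abstract describes.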

Sparse representation is an emerging topic among researchers. Methods that represent large volumes of dense data sparsely are needed in many fields, such as classification, compression, and signal denoising. Dictionary learning is the foundation of sparse representation. In most dictionary learning approaches, the dictionary is learned from the input training signals, which is time-consuming. To address this issue, a shift-invariant dictionary is used for action recognition in this work: the Shift-Invariant Dictionary (SID) is constructed at the initial stage from shifted copies of the initial atoms. The advantage of the proposed SID-based action recognition method is that it requires minimal training time while achieving the highest accuracy.
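The construction of a dictionary that is fixed up front — every column a shifted copy of a small set of base atoms, so no iterative training over the input signals is needed — can be sketched as follows. The function name and the circular-shift construction are illustrative assumptions, not the paper's exact SID formulation.

```python
import numpy as np

def build_shift_invariant_dictionary(base_atoms, signal_len):
    """Build a dictionary whose columns are all circular shifts of a few
    base atoms. The dictionary is constructed once, with no training loop."""
    atoms = []
    for atom in base_atoms:
        padded = np.zeros(signal_len)
        padded[:len(atom)] = atom          # place the atom at the origin
        for shift in range(signal_len):
            atoms.append(np.roll(padded, shift))  # one column per shift
    D = np.stack(atoms, axis=1)
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    return D

# Example: one Gaussian-bump base atom; the dictionary then contains every
# shifted copy of it, so a shifted occurrence of the pattern in a signal
# matches exactly one column.
base = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
D = build_shift_invariant_dictionary([base], signal_len=32)
```

Since the atoms cover every shift by construction, sparse coding against `D` reduces to a fast correlation/pursuit step at test time, which is where the training-time saving comes from.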


2019, Vol 6 (6), pp. 9280-9293
Author(s): Zan Gao, Hai-Zhen Xuan, Hua Zhang, Shaohua Wan, Kim-Kwang Raymond Choo

2013, Vol 18 (2-3), pp. 49-60
Author(s): Damian Dudziński, Tomasz Kryjak, Zbigniew Mikrut

Abstract: In this paper a human action recognition algorithm is described which uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The performed tests indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
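The finite-state-machine stage can be sketched as a small automaton driven by a per-frame geometric feature of the silhouette. Everything here is a hypothetical example — the aspect-ratio feature, the postures, the thresholds, and the "falling" action are my own illustration, not the paper's actual states or parameters.

```python
def classify_posture(aspect_ratio):
    # Map one silhouette bounding-box feature (height/width) to a posture.
    # Thresholds are illustrative placeholders.
    if aspect_ratio > 1.6:
        return "standing"   # tall, narrow silhouette
    if aspect_ratio < 0.8:
        return "lying"      # wide, low silhouette
    return "bending"

def recognize_action(aspect_ratios):
    """Finite state machine over the per-frame posture sequence:
    report 'falling' once the path standing -> bending -> lying completes."""
    state = "start"
    for r in aspect_ratios:
        posture = classify_posture(r)
        if state == "start" and posture == "standing":
            state = "standing"
        elif state == "standing" and posture == "bending":
            state = "bending"
        elif state == "bending" and posture == "lying":
            return "falling"   # accepting state reached
    return "none"
```

For example, the frame sequence `[2.0, 1.2, 0.5]` walks the machine through standing, bending, lying and reports "falling", while `[2.0, 2.0]` never leaves the first state. Each real action in the paper would correspond to its own accepting path over such silhouette features.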


2018, Vol 6 (10), pp. 323-328
Author(s): K. Kiruba, D. Shiloah Elizabeth, C. Sunil Retmin Raj

ROBOT, 2012, Vol 34 (6), pp. 745
Author(s): Bin WANG, Yuanyuan WANG, Wenhua XIAO, Wei WANG, Maojun ZHANG
