Learning sparse representations for view-independent human action recognition based on fuzzy distances

2013 ◽  
Vol 121 ◽  
pp. 344-353 ◽  
Author(s):  
Alexandros Iosifidis ◽  
Anastasios Tefas ◽  
Ioannis Pitas
2015 ◽  
Vol 2015 ◽  
pp. 1-6
Author(s):  
Zhong Zhang ◽  
Shuang Liu

Human action recognition in wireless sensor networks (WSN) is an attractive research direction due to its wide range of applications. However, human actions captured by different sensor nodes in a WSN are observed from different views, and classifier performance tends to degrade sharply. In this paper, we focus on cross-view action recognition in WSN and propose a novel algorithm named discriminative transferable sparse coding (DTSC) to overcome this drawback. We learn the sparse representation with an explicit discriminative goal, making the proposed method well suited for recognition. Furthermore, we simultaneously learn the dictionaries of different sensor nodes such that the same actions observed from different nodes have similar sparse representations. Our method is verified on the IXMAS dataset, and the experimental results demonstrate that it outperforms previous methods on cross-view action recognition in WSN.
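A minimal sketch of the cross-view coupling idea is given below, assuming a simplified objective (per-view reconstruction, an l1 sparsity penalty, and a `beta`-weighted term pulling the two views' codes together). The paper's exact discriminative formulation is not reproduced; the function names, parameters, and optimization scheme are illustrative assumptions.

```python
# Hypothetical simplification of coupled, cross-view sparse coding:
#   min  0.5*||X1 - D1 A1||^2 + 0.5*||X2 - D2 A2||^2
#        + lam*(||A1||_1 + ||A2||_1) + 0.5*beta*||A1 - A2||^2
import numpy as np

def soft_threshold(Z, t):
    """Proximal operator of the l1 norm (elementwise shrinkage)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def coupled_sparse_coding(X1, X2, n_atoms=64, lam=0.1, beta=0.5,
                          n_iter=100, step=1e-3, seed=0):
    """Alternate ISTA steps on the codes with gradient steps on the
    per-view dictionaries; beta couples the two views' codes."""
    rng = np.random.default_rng(seed)
    d, n = X1.shape
    D1 = rng.standard_normal((d, n_atoms)); D1 /= np.linalg.norm(D1, axis=0)
    D2 = rng.standard_normal((d, n_atoms)); D2 /= np.linalg.norm(D2, axis=0)
    A1 = np.zeros((n_atoms, n)); A2 = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        # ISTA: gradient of the smooth part, then l1 shrinkage.
        L1 = np.linalg.norm(D1, 2) ** 2 + beta   # Lipschitz bound
        L2 = np.linalg.norm(D2, 2) ** 2 + beta
        G1 = D1.T @ (D1 @ A1 - X1) + beta * (A1 - A2)
        G2 = D2.T @ (D2 @ A2 - X2) + beta * (A2 - A1)
        A1 = soft_threshold(A1 - G1 / L1, lam / L1)
        A2 = soft_threshold(A2 - G2 / L2, lam / L2)
        # Dictionary update: gradient step, then renormalize the atoms.
        D1 -= step * (D1 @ A1 - X1) @ A1.T
        D2 -= step * (D2 @ A2 - X2) @ A2.T
        D1 /= np.maximum(np.linalg.norm(D1, axis=0), 1e-8)
        D2 /= np.maximum(np.linalg.norm(D2, axis=0), 1e-8)
    return D1, D2, A1, A2

# Toy usage: paired descriptors of the same actions seen from two views.
rng = np.random.default_rng(1)
X1 = rng.standard_normal((128, 200))
X2 = X1 + 0.1 * rng.standard_normal((128, 200))
D1, D2, A1, A2 = coupled_sparse_coding(X1, X2)
print("relative code discrepancy:",
      np.linalg.norm(A1 - A2) / (np.linalg.norm(A1) + 1e-8))
```

The coupling term plays the role described in the abstract: as `beta` grows, the two views' sparse codes for paired samples are driven together, so a classifier trained on codes from one node transfers to the other.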


2013 ◽  
Vol 18 (2-3) ◽  
pp. 49-60 ◽  
Author(s):  
Damian Dudziński ◽  
Tomasz Kryjak ◽  
Zbigniew Mikrut

Abstract In this paper, a human action recognition algorithm is described that uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The performed tests indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
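A minimal sketch of the finite-state-machine idea follows. The per-frame feature (a silhouette height/width aspect ratio), the states, and the thresholds are hypothetical stand-ins for the paper's geometrical silhouette description, chosen only to make the mechanism concrete.

```python
# FSM-based action recognition driven by a per-frame silhouette feature.
# States, thresholds, and min_frames debouncing are illustrative choices.
from enum import Enum, auto

class State(Enum):
    STANDING = auto()
    BENDING = auto()
    LYING = auto()

def classify_pose(aspect_ratio):
    """Map a silhouette height/width ratio to a coarse pose symbol."""
    if aspect_ratio > 1.8:
        return State.STANDING
    if aspect_ratio > 0.8:
        return State.BENDING
    return State.LYING

def recognize(aspect_ratios, min_frames=5):
    """Emit an action event when the FSM settles in a new state for at
    least `min_frames` consecutive frames (simple noise debouncing)."""
    state, candidate, count, events = State.STANDING, None, 0, []
    for t, ar in enumerate(aspect_ratios):
        pose = classify_pose(ar)
        if pose is state:
            candidate, count = None, 0
            continue
        if pose is candidate:
            count += 1
        else:
            candidate, count = pose, 1
        if count >= min_frames:
            events.append((t, state.name + " -> " + pose.name))
            state, candidate, count = pose, None, 0
    return events

# Toy trace: a person stands, bends down, then lies on the floor.
trace = [2.2] * 20 + [1.2] * 20 + [0.5] * 20
for t, ev in recognize(trace):
    print(f"frame {t}: {ev}")
```

Requiring several consecutive frames before a transition fires is what lets such a machine run on a noisy real-time silhouette stream without flickering between actions.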


2018 ◽  
Vol 6 (10) ◽  
pp. 323-328
Author(s):  
K. Kiruba ◽  
D. Shiloah Elizabeth ◽  
C. Sunil Retmin Raj

ROBOT ◽  
2012 ◽  
Vol 34 (6) ◽  
pp. 745 ◽  
Author(s):  
Bin WANG ◽  
Yuanyuan WANG ◽  
Wenhua XIAO ◽  
Wei WANG ◽  
Maojun ZHANG

2021 ◽  
Vol 11 (11) ◽  
pp. 4940
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

Research on video data is challenging because not only spatial but also temporal features must be extracted, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Although action recognition performance has improved, model complexity still limits real-time operation. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as temporal information. The spatial feature maps are weighted-averaged by frame change, transformed into spatiotemporal features, and fed into a multilayer perceptron, which has lower complexity than other HAR models; thus, the method is well suited to a single embedded system connected to CCTV. Evaluation of recognition accuracy and data processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than an HAR model using long short-term memory when only a small number of video frames is available, and the fast processing speed confirmed the possibility of real-time operation. In addition, the performance of the proposed weighted-mean-based HAR model was verified on a Jetson Nano, confirming its suitability for low-cost GPU-based embedded systems.
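A minimal sketch of the change-rate weighting described above is shown below. A random projection stands in for the lightweight CNN backbone, and all shapes, names, and the handling of the first frame are illustrative assumptions, not the authors' implementation.

```python
# Change-weighted temporal pooling of per-frame features (sketch).
import numpy as np

def frame_change_rates(frames):
    """Mean absolute pixel difference between consecutive frames,
    normalized so the weights sum to one."""
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2, 3))
    diffs = np.concatenate([[diffs.mean()], diffs])  # weight for frame 0
    return diffs / (diffs.sum() + 1e-8)

def pooled_descriptor(frames, W):
    """Change-weighted temporal average of per-frame feature vectors."""
    w = frame_change_rates(frames)               # (T,)
    feats = frames.reshape(len(frames), -1) @ W  # (T, 256) stand-in backbone
    return w @ feats                             # (256,) spatiotemporal code

rng = np.random.default_rng(0)
video = rng.random((16, 112, 112, 3))                 # T x H x W x C clip
W = rng.standard_normal((video[0].size, 256)) * 0.01  # shared "CNN" weights
desc = pooled_descriptor(video, W)
print(desc.shape)  # (256,) -> input to a small MLP classifier
```

The design intuition matches the abstract: frames where the scene changes most carry the most motion information, so they dominate the pooled descriptor, and a single fixed-size vector lets a cheap MLP replace a recurrent or two-stream temporal model.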

