Remarkable Skeleton Based Human Action Recognition

2020 ◽  
pp. 109-122
Author(s):  
Sushma Jaiswal ◽  
Tarun Jaiswal

Skeleton-based human action recognition (SBHAR) has wide applications in cognitive science and automatic surveillance. However, the most challenging and crucial problem in SBHAR is the significant view variation that occurs while capturing the data. A substantial amount of satisfactory work has already been done in this area, including methods based on Red Green Blue (RGB) data. The performance of SBHAR is also affected by various factors such as video frame settings, view variations in motion, different backgrounds, and inter-personal differences. In this survey, we explicitly address these challenges and provide a complete overview of advances in the field. Deep learning methods have been used in this field for a long time, but so far no research has fully demonstrated their usefulness. In this paper, we first highlight the need for action recognition and the significance of 3D skeleton data, and finally we survey the largest 3D skeleton dataset, NTU-RGB+D, and its extended version, NTU-RGB+D 120.
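
A short illustration of the 3D skeleton data discussed above: the sketch below (Python with NumPy) represents an NTU-RGB+D clip as a frames-by-joints-by-3 array using the dataset's 25-joint Kinect-v2 layout and applies a common root-centering step. The normalization choice and function name are illustrative assumptions, not a specific method from this survey.

```python
import numpy as np

# NTU-RGB+D skeletons: 25 Kinect-v2 joints, each an (x, y, z) coordinate.
NUM_JOINTS = 25

def center_on_root(seq: np.ndarray) -> np.ndarray:
    """Subtract the spine-base joint (index 0) from every joint in each
    frame, a common preprocessing step that reduces position variation.

    seq: array of shape (num_frames, NUM_JOINTS, 3)
    """
    assert seq.ndim == 3 and seq.shape[1:] == (NUM_JOINTS, 3)
    return seq - seq[:, :1, :]  # broadcast the root joint over all joints

# Example: a synthetic 60-frame action clip.
clip = np.random.rand(60, NUM_JOINTS, 3).astype(np.float32)
print(center_on_root(clip).shape)  # (60, 25, 3)
```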

2021 ◽  
Vol 11 (6) ◽  
pp. 2675
Author(s):  
Nusrat Tasnim ◽  
Mohammad Khairul Islam ◽  
Joong-Hwan Baek

Human activity recognition has become a significant research trend in the fields of computer vision, image processing, and human–machine or human–object interaction due to cost-effectiveness, time management, rehabilitation, and disease pandemics. Over the past years, several methods have been published for human action recognition using RGB (red, green, and blue), depth, and skeleton datasets. Most of the methods introduced for action classification using skeleton datasets are constrained in some respects, including feature representation, complexity, and performance. Thus, providing an effective and efficient method for human action discrimination using a 3D skeleton dataset remains a challenging problem. There is considerable room to map the 3D skeleton joint coordinates into spatio-temporal formats that reduce the complexity of the system, recognize human behaviors more accurately, and improve the overall performance. In this paper, we propose a spatio-temporal image formation (STIF) technique that encodes 3D skeleton joints by capturing spatial information and temporal changes for action discrimination. We apply transfer learning (the pretrained models MobileNetV2, DenseNet121, and ResNet18, trained on the ImageNet dataset) to extract discriminative features and evaluate the proposed method with several fusion techniques. We mainly investigate the effect of three fusion methods, namely element-wise average, multiplication, and maximization, on recognition performance. With the STIF representation, our deep learning-based method outperforms prior work on UTD-MHAD (University of Texas at Dallas Multimodal Human Action Dataset) and MSR-Action3D (Microsoft Research Action3D), two publicly available benchmark 3D skeleton datasets. We attain accuracies of approximately 98.93%, 99.65%, and 98.80% on UTD-MHAD and 96.00%, 98.75%, and 97.08% on MSR-Action3D using MobileNetV2, DenseNet121, and ResNet18, respectively.
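
To make the STIF idea and the fusion comparison more concrete, here is a minimal PyTorch sketch assuming torchvision is available: a toy joints-by-time pseudo-image stands in for the paper's STIF encoding (the exact mapping below is an illustrative assumption, not the authors' formulation), two ImageNet-pretrained backbones score it, and the three element-wise fusion rules from the abstract combine the scores. Fine-tuning the heads on the action classes is omitted for brevity.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def stif_image(seq: torch.Tensor, size: int = 224) -> torch.Tensor:
    """Toy spatio-temporal encoding: (frames, joints, 3) -> (3, size, size).

    The x/y/z coordinate becomes the colour channel, joints the vertical
    axis, and time the horizontal axis, then the result is rescaled to a
    standard ImageNet input size.
    """
    img = seq.permute(2, 1, 0)                          # (3, joints, frames)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return F.interpolate(img.unsqueeze(0), size=(size, size),
                         mode="bilinear", align_corners=False).squeeze(0)

# Two ImageNet-pretrained backbones (classification heads left untuned here).
net_a = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).eval()
net_b = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

clip = torch.rand(60, 20, 3)                            # 60 frames, 20 joints
x = stif_image(clip).unsqueeze(0)                       # batch of one image
with torch.no_grad():
    scores_a, scores_b = net_a(x), net_b(x)

# The three element-wise fusion rules compared in the abstract.
fused_avg = (scores_a + scores_b) / 2
fused_mul = scores_a * scores_b
fused_max = torch.maximum(scores_a, scores_b)
```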


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Jinyue Zhang ◽  
Lijun Zi ◽  
Yuexian Hou ◽  
Mingen Wang ◽  
Wenting Jiang ◽  
...  

To support smart construction, the digital twin has become a well-recognized concept for virtually representing a physical facility. It is equally important to recognize human actions and the movements of construction equipment in virtual construction scenes. Compared to the extensive research on human action recognition (HAR), which can be applied to identify construction workers, research in the field of construction equipment action recognition (CEAR) is very limited, mainly due to the lack of available datasets with videos showing the actions of construction equipment. The contributions of this research are as follows: (1) the development of a comprehensive video dataset of 2,064 clips with five action types for excavators and dump trucks; (2) a new deep learning-based CEAR approach (known as a simplified temporal convolutional network, or STCN) that combines a convolutional neural network (CNN) with long short-term memory (LSTM, an artificial recurrent neural network), where the CNN extracts image features and the LSTM extracts temporal features from video frame sequences; and (3) a comparison between this proposed approach, a similar CEAR method, and two of the best-performing HAR approaches, namely three-dimensional (3D) convolutional networks (ConvNets) and two-stream ConvNets, to evaluate the performance of STCN and investigate the possibility of directly transferring HAR approaches to the field of CEAR.
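
A minimal PyTorch sketch of the CNN + LSTM pattern the STCN is built on may help: a pretrained ResNet18 (an assumed stand-in for the paper's image-feature CNN) embeds each frame, an LSTM aggregates the sequence, and a linear head scores the five equipment action types. All layer sizes and names here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SimpleSTCN(nn.Module):
    """Per-frame CNN features fed to an LSTM, then a classification head."""

    def __init__(self, num_actions: int = 5, hidden: int = 256):
        super().__init__()
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Keep everything up to the global average pool; drop the fc head.
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, 512)
        out, _ = self.lstm(feats)           # temporal features per time step
        return self.head(out[:, -1])        # classify from the last step

logits = SimpleSTCN()(torch.rand(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```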


2018 ◽  
Vol 6 (10) ◽  
pp. 323-328
Author(s):  
K. Kiruba ◽  
D. Shiloah Elizabeth ◽  
C. Sunil Retmin Raj

Author(s):  
Gopika Rajendran ◽  
Ojus Thomas Lee ◽  
Arya Gopi ◽  
Jais Jose ◽  
Neha Gautham

With the evolution of computing technology in many applications such as human-robot interaction, human-computer interaction, and health-care systems, 3D human body models and their dynamic motions have gained popularity. Human performance involves human body shapes and their relative motions. Research on human activity recognition is structured around how the complex movement of a human body is identified and analyzed. Vision-based action recognition from video is one such task, in which actions are inferred by observing the complete action sequence performed by a human. Many techniques have been devised over recent decades to develop a robust and effective framework for action recognition. In this survey, we summarize recent advances in human action recognition, namely machine learning approaches, deep learning approaches, and the evaluation of these approaches.


2017 ◽  
Vol 11 (8) ◽  
pp. 623-632
Author(s):  
Maryam Koohzadi ◽  
Nasrollah Moghadam Charkari
