A Survey of Vision-Based Transfer Learning in Human Activity Recognition

Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2412
Author(s):  
David Ada Adama ◽  
Ahmad Lotfi ◽  
Robert Ranson

Human activity recognition (HAR) and transfer learning (TL) are two broad areas widely studied in computational intelligence (CI) and artificial intelligence (AI) applications. Much effort has gone into developing solutions that advance the performance of existing systems. However, existing HAR methods still face challenges: in particular, the variation in the data that HAR systems require limits many current solutions. The type of sensory information used can play an important role in overcoming some of these challenges, and 3D vision-based information acquired with RGB-D cameras is one such type. Furthermore, given the successes of TL, HAR stands to benefit from it in addressing the shortcomings of existing methods. It is therefore important to review the current state of the art in both areas. This paper presents a comprehensive survey of vision-based HAR methods, with a focus on the incorporation of TL, and discusses limitations, challenges and possible directions for future research.

2022 ◽  
Vol 54 (8) ◽  
pp. 1-34
Author(s):  
Fuqiang Gu ◽  
Mu-Huan Chung ◽  
Mark Chignell ◽  
Shahrokh Valaee ◽  
Baoding Zhou ◽  
...  

Human activity recognition is key to many applications such as healthcare and smart homes. In this study, we provide a comprehensive survey of recent advances and challenges in human activity recognition (HAR) with deep learning. Although there are many surveys on HAR, they focus mainly on the taxonomy of HAR and review state-of-the-art HAR systems built with conventional machine learning methods. Several recent works have reviewed studies that use deep models for HAR, but they cover only a few deep models and their variants. There is still a need for a comprehensive and in-depth survey of HAR with recently developed deep learning methods.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 885 ◽  
Author(s):  
Zhongzheng Fu ◽  
Xinrun He ◽  
Enkai Wang ◽  
Jun Huo ◽  
Jian Huang ◽  
...  

Human activity recognition (HAR) based on wearable devices has attracted growing attention from researchers as sensor technology has developed in recent years. However, personalized HAR requires high recognition accuracy while maintaining the model’s generalization capability, which is a major challenge in this field. This paper presents a compact wireless wearable sensor node that combines an air pressure sensor and an inertial measurement unit (IMU) to provide multi-modal information for HAR model training. To address personalized recognition of user activities, we propose a new transfer learning algorithm, a joint probability domain adaptation method with improved pseudo-labels (IPL-JPDA). This method adds an improved pseudo-label strategy to the JPDA algorithm to avoid the cumulative errors caused by inaccurate initial pseudo-labels. To validate the device and the method, we used the newly designed sensor node to collect seven daily activities from seven subjects. Nine different HAR models were trained with traditional machine learning and transfer learning methods. The experimental results show that the multi-modal data improve the accuracy of the HAR system. The proposed IPL-JPDA algorithm achieves the best performance among the compared HAR models, with an average recognition accuracy of 93.2% across subjects.
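The improved pseudo-label idea can be illustrated with a small sketch. The Python snippet below is only a minimal, assumed illustration of confidence-filtered pseudo-labelling for adapting a source-trained classifier to a new user; it is not the authors' IPL-JPDA implementation, and the classifier, threshold and iteration count are placeholder choices.

```python
# Minimal sketch of confidence-filtered pseudo-labelling for domain adaptation.
# This is NOT the authors' IPL-JPDA implementation; it only illustrates refining
# pseudo-labels iteratively instead of trusting the initial ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_adapt(X_src, y_src, X_tgt, n_iters=5, conf_threshold=0.9):
    """Train on labelled source data, then iteratively add high-confidence
    target samples (with their pseudo-labels) to the training set."""
    clf = LogisticRegression(max_iter=1000)
    X_train, y_train = X_src, y_src
    for _ in range(n_iters):
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(X_tgt)       # soft predictions on target data
        conf = proba.max(axis=1)               # confidence per target sample
        keep = conf >= conf_threshold          # accept only confident pseudo-labels
        if not keep.any():
            break
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X_train = np.vstack([X_src, X_tgt[keep]])
        y_train = np.concatenate([y_src, pseudo])
    return clf
```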


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have begun to use deep learning models for human activity recognition (HAR). However, the development of such models has lagged because training deep learning models requires large amounts of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high cost and effort. Existing methods rely heavily on manual data collection and on proper labeling of the data by human administrators, which often makes the data gathering process slow and prone to human-biased labeling. To address these problems, we propose a semi-supervised active transfer learning method that reduces the labeling required for new data by building on what the model has already learned. This method achieved 95.9% performance while also reducing labeling compared to random sampling or active transfer learning methods.
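As a rough illustration of how active learning reduces manual labeling, the sketch below selects the unlabeled samples the current model is least certain about so that only those are sent to a human annotator. The entropy criterion and budget are illustrative assumptions, not the authors' exact selection strategy.

```python
# Minimal sketch of uncertainty-based sample selection for active learning.
# The entropy criterion and the labeling budget are illustrative assumptions.
import numpy as np

def select_for_labeling(proba, budget=10):
    """Pick the `budget` unlabeled samples the model is least sure about.

    proba : (n_samples, n_classes) array of predicted class probabilities
            for the unlabeled pool (e.g. from model.predict_proba).
    """
    # Entropy of the predictive distribution: high entropy = high uncertainty.
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    # Request human labels only for the most uncertain samples.
    return np.argsort(entropy)[-budget:]
```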


2021 ◽  
pp. 163-177
Author(s):  
Michael Kirchhof ◽  
Lena Schmid ◽  
Christopher Reining ◽  
Michael ten Hompel ◽  
Markus Pauly

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1888
Author(s):  
Malek Boujebli ◽  
Hassen Drira ◽  
Makram Mestiri ◽  
Imed Riadh Farah

Human activity recognition is one of the most challenging and active areas of research in the computer vision domain. However, designing automatic systems that are robust to the significant variability caused by object combinations and the high complexity of human motions is even more challenging. In this paper, we propose to model the inter-frame rigid evolution of skeleton parts as a trajectory in the Lie group SE(3)×…×SE(3). The motion of the object is modeled similarly as an additional trajectory in the same manifold. Classification is performed by a rate-invariant comparison of the resulting trajectories mapped to a vector space, the Lie algebra. Experimental results on three action and activity datasets show that the proposed method outperforms various state-of-the-art human activity recognition approaches.
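To illustrate the idea of comparing trajectories in a vector space, the sketch below maps each per-frame rigid transform to a 6-D vector (rotation log plus translation). This is a simplified stand-in for the exact SE(3) logarithm used by the authors and is included only to show the general approach.

```python
# Minimal sketch: map per-frame rigid transforms to a vector space so that
# trajectories can be compared with standard distances. This uses a simplified
# (rotation-log + raw translation) representation, NOT the exact SE(3)
# logarithm from the paper.
import numpy as np
from scipy.spatial.transform import Rotation

def transform_to_vector(R, t):
    """Map one rigid transform (3x3 rotation R, 3-vector translation t)
    to a 6-D feature: rotation vector (axis * angle) plus translation."""
    rotvec = Rotation.from_matrix(R).as_rotvec()   # log map of the rotation part
    return np.concatenate([rotvec, t])

def trajectory_features(transforms):
    """Stack per-frame features of a skeleton-part trajectory.

    transforms : iterable of (R, t) pairs, one per frame.
    """
    return np.stack([transform_to_vector(R, t) for R, t in transforms])
```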


Biosensors ◽  
2018 ◽  
Vol 8 (3) ◽  
pp. 60 ◽  
Author(s):  
Stavros Ntalampiras ◽  
Ilyas Potamitis

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8337
Author(s):  
Hyeokhyen Kwon ◽  
Gregory D. Abowd ◽  
Thomas Plötz

Supervised training of human activity recognition (HAR) systems based on body-worn inertial measurement units (IMUs) is often constrained by the typically rather small amounts of labeled sample data. Systems like IMUTube have been introduced that employ cross-modality transfer approaches to convert videos of activities of interest into virtual IMU data. We demonstrate for the first time how such large-scale virtual IMU datasets can be used to train HAR systems that are substantially more complex than the state of the art. Complexity is thereby represented by the number of model parameters that can be trained robustly. Our models contain components dedicated to capturing the essentials of IMU data as they are relevant for activity recognition, which increases the number of trainable parameters by a factor of 1100 compared to state-of-the-art model architectures. We evaluate the new model architecture on the challenging task of analyzing free-weight gym exercises, specifically on classifying 13 dumbbell exercises. We collected around 41 h of virtual IMU data using IMUTube from exercise videos available on YouTube. The proposed model is trained with this large amount of virtual IMU data and calibrated with a mere 36 min of real IMU data. The trained model was evaluated on a real IMU dataset, and we demonstrate a substantial performance improvement of 20% absolute F1 score compared to state-of-the-art convolutional models in HAR.
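The two-stage recipe described above (pre-train on large amounts of virtual IMU data, then calibrate with a small amount of real IMU data) can be sketched as follows. The training loop, model and data loaders are placeholders and do not reflect the paper's architecture or hyperparameters.

```python
# Minimal PyTorch-style sketch of pre-training on virtual IMU data followed by
# calibration (fine-tuning) on a small real IMU set. All names are placeholders.
import torch
from torch import nn, optim

def train(model, loader, epochs, lr):
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:                 # x: IMU windows, y: activity labels
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# Stage 1 (assumed usage): pre-train on virtual IMU data generated from videos.
# model = train(model, virtual_imu_loader, epochs=50, lr=1e-3)
# Stage 2 (assumed usage): calibrate on a small real IMU set with a lower rate.
# model = train(model, real_imu_loader, epochs=10, lr=1e-4)
```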

