Feature Fusion of a Deep-Learning Algorithm into Wearable Sensor Devices for Human Activity Recognition

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8294
Author(s):  
Chih-Ta Yen ◽  
Jia-Xian Liao ◽  
Yi-Kai Huang

This paper presents a wearable device, fitted on the waist of a participant, that recognizes six activities of daily living (walking, walking upstairs, walking downstairs, sitting, standing, and laying) through a deep-learning algorithm for human activity recognition (HAR). The wearable device comprises a single-board computer (SBC) and six-axis sensors. The deep-learning algorithm employs three parallel convolutional neural networks for local feature extraction, whose outputs are subsequently concatenated to establish feature fusion models of varying kernel size. By using kernels of different sizes, relevant local features of varying lengths are identified, thereby increasing recognition accuracy. For the experiments, the University of California, Irvine (UCI) dataset and self-recorded data were used separately. The self-recorded data were obtained by having 21 participants wear the device on their waist and perform the six activities in the laboratory; these data were used to verify the performance of the proposed deep-learning algorithm on the wearable device. The accuracies for the six activities in the UCI dataset and in the self-recorded data were 97.49% and 96.27%, respectively, and 99.56% and 97.46% under tenfold cross-validation. The experimental results verify the proposed convolutional neural network (CNN) architecture, which can be applied in rehabilitation assessment for people unable to exercise vigorously.
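
The multi-kernel fusion idea can be illustrated with a minimal NumPy sketch (not the paper's implementation; the kernel sizes, ReLU activation, and global max pooling here are illustrative assumptions): each branch convolves the same sensor window with a kernel of a different length, and the pooled branch outputs are concatenated into one fusion vector.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution followed by ReLU, standing in
    for one convolutional branch."""
    n = len(signal) - len(kernel) + 1
    out = np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])
    return np.maximum(out, 0.0)  # ReLU

def fused_features(signal, kernel_sizes=(3, 5, 7), rng=None):
    """Run one branch per kernel size, global-max-pool each branch,
    and concatenate the pooled outputs into a single fusion vector."""
    if rng is None:
        rng = np.random.default_rng(0)
    pooled = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k)  # untrained example weights
        feat = conv1d(signal, kernel)    # local features with a length-k receptive field
        pooled.append(feat.max())        # global max pooling
    return np.array(pooled)

# A toy 1-D window from one accelerometer axis.
window = np.sin(np.linspace(0, 4 * np.pi, 64))
features = fused_features(window)
print(features.shape)  # one pooled feature per kernel size -> (3,)
```

In a trained model the branch outputs would be full feature maps fed to dense layers; max-pooling each branch to a scalar here just keeps the fusion step visible.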

Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1556 ◽  
Author(s):  
Carlos Avilés-Cruz ◽  
Andrés Ferreyra-Ramírez ◽  
Arturo Zúñiga-López ◽  
Juan Villegas-Cortéz

In the last decade, deep-learning techniques have further improved human activity recognition (HAR) performance on several benchmark datasets. This paper presents a novel framework to classify and analyze human activities. A new convolutional neural network (CNN) strategy is applied to single-user movement recognition using a smartphone. Three parallel CNNs are used for local feature extraction, and their outputs are later fused in the classification stage. The whole CNN scheme is based on a feature fusion of a fine-CNN, a medium-CNN, and a coarse-CNN. A tri-axial accelerometer and a tri-axial gyroscope embedded in a smartphone record the acceleration and angular-velocity signals. The six activities successfully classified are walking, walking upstairs, walking downstairs, sitting, standing, and laying. A performance evaluation of the proposed CNN is presented.
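
As context for the input side of such a smartphone pipeline, the sketch below segments a toy six-channel accelerometer + gyroscope stream into overlapping fixed-size windows. The 128-sample width with 50% overlap at 50 Hz follows the common UCI-HAR convention and is an assumption, not a detail taken from this abstract.

```python
import numpy as np

def sliding_windows(samples, width=128, stride=64):
    """Segment a (T, 6) accel+gyro stream into overlapping fixed-size
    windows, the usual input shape for the parallel CNN branches."""
    starts = range(0, len(samples) - width + 1, stride)
    return np.stack([samples[s:s + width] for s in starts])

# Ten seconds of a toy 50 Hz tri-axial accelerometer + gyroscope stream.
stream = np.random.default_rng(1).standard_normal((500, 6))
windows = sliding_windows(stream)
print(windows.shape)  # (number_of_windows, 128, 6)
```

Each (128, 6) window would then be fed to the fine, medium, and coarse branches in parallel.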


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 885 ◽  
Author(s):  
Zhongzheng Fu ◽  
Xinrun He ◽  
Enkai Wang ◽  
Jun Huo ◽  
Jian Huang ◽  
...  

Human activity recognition (HAR) based on wearable devices has attracted increasing attention from researchers as sensor technology has developed in recent years. However, personalized HAR requires high recognition accuracy, and maintaining the model's generalization capability is a major challenge in this field. This paper presents a compact wireless wearable sensor node that combines an air-pressure sensor and an inertial measurement unit (IMU) to provide multi-modal information for HAR model training. To achieve personalized recognition of user activities, we propose a new transfer-learning algorithm: a joint probability domain adaptive method with improved pseudo-labels (IPL-JPDA). This method adds the improved pseudo-label strategy to the JPDA algorithm to avoid cumulative errors caused by inaccurate initial pseudo-labels. To verify the equipment and method, we used the newly designed sensor node to collect seven daily activities from seven subjects. Nine different HAR models were trained using traditional machine-learning and transfer-learning methods. The experimental results show that the multi-modal data improve the accuracy of the HAR system. The IPL-JPDA algorithm proposed in this paper has the best performance among the five HAR models, with an average recognition accuracy of 93.2% across subjects.
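
The general idea of refining pseudo-labels iteratively (though not the authors' actual IPL-JPDA formulation, which involves joint probability distribution alignment) can be sketched with a simple nearest-centroid scheme: initial pseudo-labels for the target user come from source-only class centroids, and labels and centroids are then refined jointly over a few iterations. The centroid classifier and all names here are illustrative assumptions.

```python
import numpy as np

def refine_pseudo_labels(source_x, source_y, target_x, iters=5):
    """Iteratively refine target-domain pseudo-labels with class centroids:
    start from source-only centroids, then recompute centroids on
    source + currently pseudo-labeled target samples."""
    classes = np.unique(source_y)
    centroids = np.stack([source_x[source_y == c].mean(axis=0) for c in classes])
    labels = None
    for _ in range(iters):
        # assign each target sample to its nearest centroid
        dists = np.linalg.norm(target_x[:, None, :] - centroids[None, :, :], axis=2)
        labels = classes[dists.argmin(axis=1)]
        # recompute centroids with the pseudo-labeled target samples mixed in
        centroids = np.stack([
            np.concatenate([source_x[source_y == c], target_x[labels == c]]).mean(axis=0)
            for c in classes
        ])
    return labels

# Toy two-class problem: the target user's feature distribution is shifted.
rng = np.random.default_rng(2)
src_x = np.concatenate([rng.normal(0, 0.5, (20, 3)), rng.normal(3, 0.5, (20, 3))])
src_y = np.array([0] * 20 + [1] * 20)
tgt_x = np.concatenate([rng.normal(0.5, 0.5, (15, 3)), rng.normal(3.5, 0.5, (15, 3))])
pseudo = refine_pseudo_labels(src_x, src_y, tgt_x)
print(pseudo)
```

The point of the iteration is that later label assignments use centroids already adapted toward the target distribution, reducing the influence of inaccurate initial pseudo-labels.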


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have used deep-learning models for human activity recognition (HAR). However, the development of such models has lagged because training them requires large amounts of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high cost and effort. Existing methods rely heavily on manual data collection and proper labeling by human administrators, which makes the data-gathering process slow and prone to human-biased labeling. To address these problems, we propose a new solution that reduces the labeling required for new data by exploiting previously learned data through a semi-supervised active transfer learning method. This method achieved 95.9% accuracy while also reducing labeling effort compared with random sampling or active transfer learning methods.
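
The active-learning half of such a pipeline is typically driven by an uncertainty-based query strategy; the least-confidence sampler below is a generic illustration (the paper's exact selection criterion is not specified in this abstract). It picks the unlabeled windows whose predicted class is least certain, so only those are sent for manual labeling.

```python
import numpy as np

def least_confident(probs, budget):
    """Pick the `budget` samples whose top-class probability is lowest --
    the classic least-confidence query strategy in active learning."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]

# Toy posterior probabilities for 8 unlabeled windows over 3 activities.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident
    [0.40, 0.35, 0.25],  # uncertain
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],  # most uncertain
    [0.80, 0.10, 0.10],
    [0.55, 0.40, 0.05],
    [0.97, 0.02, 0.01],
    [0.60, 0.20, 0.20],
])
query = least_confident(probs, budget=2)
print(sorted(query))  # the two least-confident windows: [1, 3]
```

Confident predictions can instead be kept as pseudo-labels, which is what lets a semi-supervised scheme cut the manual-labeling load relative to random sampling.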

