Human Daily Activity Recognition Performed Using Wearable Inertial Sensors Combined With Deep Learning Algorithms

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 174105-174114
Author(s):  
Chih-Ta Yen ◽  
Jia-Xian Liao ◽  
Yi-Kai Huang

This study employed wearable inertial sensors integrated with an activity-recognition algorithm to recognize six types of daily human activity: walking, ascending stairs, descending stairs, sitting, standing, and lying. The sensor system consisted of a microcontroller, a three-axis accelerometer, and a three-axis gyroscope; the algorithm involved collecting and normalizing the activity signals. To simplify computation and maximize recognition accuracy, the data were preprocessed with linear discriminant analysis, which reduced the dimensionality of the accelerometer and gyroscope signals while capturing their discriminative features; the reduced features were then evaluated with six classification algorithms. The new contribution is that, after feature extraction, the classification results indicated that an artificial neural network was the most stable and effective of the six algorithms. In the experiment, 20 participants wore the sensors on their waists while performing the six daily activities, to verify the effectiveness of the system. According to the cross-validation results, the combination of linear discriminant analysis and an artificial neural network was the most stable classification algorithm and generalized best; its activity-recognition accuracy was 87.37% on the training data and 80.96% on the test data.
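The preprocessing-then-classification pipeline the abstract describes (normalize, reduce with LDA, classify with a small neural network) can be sketched as follows. The feature dimensions, sample counts, and network size below are illustrative placeholders standing in for the paper's actual accelerometer/gyroscope features, not its reported configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 600, 36, 6   # six daily activities
X = rng.normal(size=(n_samples, n_features))    # synthetic sensor features
y = rng.integers(0, n_classes, size=n_samples)
X += y[:, None] * 0.5                           # shift class means so classes separate

pipeline = make_pipeline(
    StandardScaler(),                                        # normalize the signals
    LinearDiscriminantAnalysis(n_components=n_classes - 1),  # 36 dims -> 5 dims
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
pipeline.fit(X, y)
print(pipeline.score(X, y))   # training accuracy on the toy data
```

With six classes, LDA can project onto at most five discriminant directions, which is what makes it attractive here as a cheap dimensionality-reduction step before the network.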


2020 ◽  
Vol 2 (1) ◽  
pp. 22
Author(s):  
Manuel Gil-Martín ◽  
José Antúnez-Durango ◽  
Rubén San-Segundo

Deep learning techniques have been widely applied to Human Activity Recognition (HAR), but a specific challenge arises when HAR systems are trained and tested on different subjects. This paper describes and evaluates several deep-learning-based techniques for adapting and selecting the training data used to build a HAR system from accelerometer recordings. Two alternatives are proposed: autoencoders and Generative Adversarial Networks (GANs). Both are based on deep neural networks with convolutional layers for feature extraction and fully connected layers for classification. The Fast Fourier Transform (FFT) is applied to the acceleration data to provide a suitable input representation for the network. The study used acceleration recordings from the hand, chest, and ankle sensors of the Physical Activity Monitoring Data Set (PAMAP2), a public dataset containing recordings from nine subjects performing 12 activities such as walking, running, sitting, ascending stairs, and ironing. Evaluation was performed with Leave-One-Subject-Out (LOSO) cross-validation: all recordings from one subject form the testing subset, and the recordings from the remaining subjects form the training subset. The results suggest that strategies using autoencoders to adapt the training data to the testing data improve performance for some users, and that training-data selection with autoencoders provides significant improvements. The GAN approach, using either the generator or the discriminator module, also improves the selection experiments.
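Two of the building blocks named above, the FFT input representation and the LOSO split, are straightforward to sketch. The window length and subject counts here are illustrative (nine subjects echoes PAMAP2, but the windowing parameters are assumptions, not the paper's settings).

```python
import numpy as np

def fft_features(windows):
    """Magnitude spectrum of fixed-length acceleration windows
    (real FFT keeps only the non-negative frequency bins)."""
    spectrum = np.abs(np.fft.rfft(windows, axis=-1))
    return spectrum / windows.shape[-1]   # simple amplitude normalization

def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject at a time."""
    subject_ids = np.asarray(subject_ids)
    for subject in np.unique(subject_ids):
        test = np.where(subject_ids == subject)[0]
        train = np.where(subject_ids != subject)[0]
        yield train, test

# Toy example: 9 subjects, 4 windows each, 128 samples per window
rng = np.random.default_rng(0)
windows = rng.normal(size=(36, 128))
subjects = np.repeat(np.arange(9), 4)

feats = fft_features(windows)
splits = list(loso_splits(subjects))
print(feats.shape, len(splits))   # (36, 65) spectra, 9 folds
```

A 128-point real FFT yields 65 frequency bins per window, and LOSO produces exactly one fold per subject, so every subject is tested on a model that never saw their data.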


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 635
Author(s):  
Yong Li ◽  
Luping Wang

Due to the wide application of human activity recognition (HAR) in sports and health, a large number of deep-learning-based HAR models have been proposed. However, many existing models fail to extract the spatial and temporal features of human activity data effectively. This paper proposes a deep learning model based on a residual block and bi-directional LSTM (BiLSTM). The model first uses the residual block to automatically extract spatial features from the multidimensional signals of MEMS inertial sensors, then uses the BiLSTM to capture the forward and backward dependencies of the feature sequence. Finally, the obtained features are fed into a Softmax layer to complete the recognition. The optimal parameters of the model were determined experimentally. A self-collected dataset was developed containing six common activities: sitting, standing, walking, running, going upstairs, and going downstairs. The proposed model was evaluated on this dataset and on two public datasets, WISDM and PAMAP2, achieving accuracies of 96.95%, 97.32%, and 97.15%, respectively. Compared with several existing models, the proposed model delivers better performance with fewer parameters.
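The residual idea at the core of this model, output = ReLU(F(x) + x), so the block learns a correction on top of an identity path, can be shown in a minimal single-channel numpy sketch. This is only the skip-connection mechanism; the paper's actual multi-channel convolutions and the BiLSTM stage are omitted.

```python
import numpy as np

def conv1d_same(x, kernel):
    """'Same'-padded single-channel 1-D convolution. np.convolve flips the
    kernel; the kernels used here are symmetric, so that is harmless."""
    pad = len(kernel) // 2
    return np.convolve(np.pad(x, pad), kernel, mode="valid")

def residual_block(x, k1, k2):
    h = np.maximum(conv1d_same(x, k1), 0.0)   # conv + ReLU
    h = conv1d_same(h, k2)                    # second conv
    return np.maximum(h + x, 0.0)             # skip connection, then ReLU

x = np.arange(8, dtype=float)                 # toy 1-D signal
k = np.array([0.25, 0.5, 0.25])               # a simple smoothing kernel
out = residual_block(x, k, k)
print(out.shape)                              # length preserved: (8,)
```

Because the skip path carries x through unchanged, the block's output length must match its input length, which is why 'same' padding is used inside the convolutions.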


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4756
Author(s):  
Irvin Hussein Lopez-Nava ◽  
Luis M. Valentín-Coronado ◽  
Matias Garcia-Constantino ◽  
Jesus Favela

Activity recognition is one of the most active areas of research in ubiquitous computing. In particular, gait activity recognition is useful for identifying various risk factors in people’s health that are directly related to their physical activity. One issue in activity recognition, and in gait recognition in particular, is that datasets are often unbalanced (i.e., the distribution of classes is not uniform), and because of this disparity, models tend to predict the classes with more instances. In the present study, two methods for classifying gait activities using accelerometer and gyroscope data from a large-scale public dataset were evaluated and compared. The gait activities in this dataset are: (i) going down an incline, (ii) going up an incline, (iii) walking on level ground, (iv) going down stairs, and (v) going up stairs. The proposed methods are based on conventional (shallow) and deep learning techniques. In addition, the data were evaluated under three treatments: the original unbalanced data, sampled data, and augmented data. The latter was based on generating synthetic data from segmented gait data. The best results were obtained with classifiers built from augmented data, with F-measure results of 0.812 (σ = 0.078) for the shallow learning approach and 0.927 (σ = 0.033) for the deep learning approach. In addition, the data augmentation strategy proposed to deal with the imbalance problem increased classification performance under both techniques.
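One common way to realize the augmentation idea described above, oversampling minority classes with synthetic variants until the class distribution is uniform, is sketched below. The jittered-oversampling scheme is an illustrative stand-in; the paper's own synthetic-generation method based on segmented gait data may differ.

```python
import numpy as np

def augment_to_balance(X, y, noise_std=0.01, seed=0):
    """Oversample minority classes with small Gaussian jitter until balanced."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, count in zip(classes, counts):
        deficit = target - count
        if deficit == 0:
            continue
        idx = rng.choice(np.where(y == cls)[0], size=deficit, replace=True)
        X_parts.append(X[idx] + rng.normal(0, noise_std, size=(deficit, X.shape[1])))
        y_parts.append(np.full(deficit, cls))
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Unbalanced toy set: 50 windows of class 0, only 10 of class 1
X = np.random.default_rng(1).normal(size=(60, 128))
y = np.array([0] * 50 + [1] * 10)
Xb, yb = augment_to_balance(X, y)
print(np.bincount(yb))   # [50 50]
```

After augmentation every class has as many instances as the largest one, which removes the incentive for a classifier to default to the majority class.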


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1669
Author(s):  
Philip Boyer ◽  
David Burns ◽  
Cari Whyne

Out-of-distribution (OOD) in the context of Human Activity Recognition (HAR) refers to data from activity classes that are not represented in the training data of a Machine Learning (ML) algorithm. OOD data are a challenge to classify accurately for most ML algorithms, especially deep learning models, which are prone to overconfident predictions based on in-distribution (IIN) classes. To simulate the OOD problem in physiotherapy, our team collected a new dataset (SPARS9x) consisting of inertial data captured by smartwatches worn by 20 healthy subjects as they performed supervised physiotherapy exercises (IIN), followed by a minimum of 3 h of data captured for each subject as they engaged in unrelated and unstructured activities (OOD). In this paper, we experiment with three traditional algorithms for OOD detection, using both engineered statistical features and deep-learning-generated features, as well as several popular deep learning approaches, on SPARS9x and two other publicly available human activity datasets (MHEALTH and SPARS). We demonstrate that, while deep learning algorithms perform better than simple traditional algorithms such as KNN with engineered features for in-distribution classification, traditional algorithms outperform deep learning approaches for OOD detection on these HAR time-series datasets.
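A minimal sketch of the kind of traditional detector compared here: score each test sample by its distance to the k-th nearest in-distribution training sample, so far-away samples get large OOD scores. The feature dimensions, k, and the Gaussian toy data are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def knn_ood_scores(train, test, k=5):
    """Distance to the k-th nearest training point; larger = more OOD-like."""
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(0)
iin_train = rng.normal(0, 1, size=(200, 8))   # in-distribution training features
iin_test = rng.normal(0, 1, size=(50, 8))     # held-out in-distribution samples
ood_test = rng.normal(6, 1, size=(50, 8))     # shifted = out-of-distribution

scores_iin = knn_ood_scores(iin_train, iin_test)
scores_ood = knn_ood_scores(iin_train, ood_test)
print(scores_ood.mean() > scores_iin.mean())  # OOD scores come out larger
```

A threshold on these scores then separates in-distribution windows from unstructured OOD activity; unlike a softmax classifier, the score cannot be "confidently wrong" about a class it has never seen.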


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5356
Author(s):  
Ganapati Bhat ◽  
Nicholas Tran ◽  
Holly Shill ◽  
Umit Y. Ogras

Human activity recognition (HAR) is growing in popularity due to its wide-ranging applications in patient rehabilitation and movement disorders. HAR approaches typically start with collecting sensor data for the activities under consideration and then develop algorithms using the dataset. As such, the success of HAR algorithms depends on the availability and quality of datasets. Most existing work on HAR uses data from inertial sensors on wearable devices or smartphones to design HAR algorithms. However, inertial sensors exhibit high noise, which makes it difficult to segment the data and classify the activities. Furthermore, existing approaches typically do not make their data publicly available, which makes it difficult or impossible to compare HAR approaches. To address these issues, we present wearable HAR (w-HAR), which contains labeled data of seven activities from 22 users. Our dataset’s unique aspect is the integration of data from inertial and wearable stretch sensors, thus providing two modalities of activity information. The wearable stretch sensor data allow us to create variable-length data segments and ensure that each segment contains a single activity. We also provide a HAR framework that uses w-HAR to classify the activities. To this end, we first perform a design space exploration to choose a neural network architecture for activity classification. Then, we use two online learning algorithms to adapt the classifier to users whose data are not included at design time. Experiments on the w-HAR dataset show that our framework achieves 95% accuracy, while the online learning algorithms improve the accuracy by as much as 40%.
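The online-adaptation idea, fit a classifier at design time, then update it incrementally as labeled windows from a new user arrive, can be illustrated with a deliberately simple stand-in: a nearest-centroid classifier whose centroids are updated by an exponential moving average. This is not the paper's neural-network framework or its specific online learning algorithms, only a sketch of why adaptation helps a shifted user.

```python
import numpy as np

class OnlineCentroidClassifier:
    """Nearest-centroid classifier with an exponential-moving-average update,
    so later (user-specific) samples can shift the design-time centroids."""

    def __init__(self, n_classes, n_features, alpha=0.3):
        self.centroids = np.zeros((n_classes, n_features))
        self.alpha = alpha

    def partial_fit(self, X, y):
        for xi, yi in zip(X, y):   # one EMA step per labeled window
            self.centroids[yi] += self.alpha * (xi - self.centroids[yi])

    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids[None], axis=-1)
        return d.argmin(axis=1)

rng = np.random.default_rng(0)
n_classes, n_features = 7, 16                 # seven activities, as in w-HAR
y_design = np.repeat(np.arange(n_classes), 100)
X_design = rng.normal(size=(700, n_features)) + y_design[:, None]
y_user = np.repeat(np.arange(n_classes), 10)  # new user: shifted features
X_user = rng.normal(size=(70, n_features)) + y_user[:, None] + 3.0

clf = OnlineCentroidClassifier(n_classes, n_features)
clf.partial_fit(X_design, y_design)           # design-time training
before = (clf.predict(X_user) == y_user).mean()
clf.partial_fit(X_user, y_user)               # online adaptation to the user
after = (clf.predict(X_user) == y_user).mean()
print(before, after)
```

Before adaptation the shifted user is badly misclassified; a handful of the user's own labeled windows pulls the centroids over and recovers accuracy, which mirrors the large accuracy gains the abstract reports for its online learning algorithms.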

