Can You See It?

2021 ◽  
Vol 25 (2) ◽  
pp. 38-42
Author(s):  
Hyeokhyen Kwon ◽  
Catherine Tong ◽  
Harish Haresamudram ◽  
Yan Gao ◽  
Gregory D. Abowd ◽  
...  

Today's smartphones and wearable devices come equipped with an array of inertial sensors, along with IMU-based Human Activity Recognition models to monitor everyday activities. However, such models rely on large amounts of annotated training data, which take considerable time and effort to collect: one has to recruit human subjects, define clear protocols for the subjects to follow, and manually annotate the collected data, on top of the administrative work that goes into organizing such a recording.

2020 ◽  
Vol 2 (1) ◽  
pp. 22
Author(s):  
Manuel Gil-Martín ◽  
José Antúnez-Durango ◽  
Rubén San-Segundo

Deep learning techniques have been widely applied to Human Activity Recognition (HAR), but a specific challenge appears when HAR systems are trained and tested with different subjects. This paper describes and evaluates several techniques based on deep learning algorithms for adapting and selecting the training data used to generate a HAR system from accelerometer recordings. The paper proposes two alternatives: autoencoders and Generative Adversarial Networks (GANs). Both alternatives are based on deep neural networks that include convolutional layers for feature extraction and fully-connected layers for classification. The Fast Fourier Transform (FFT) is applied to the acceleration signals to provide appropriate input to the deep neural network. The study used acceleration recordings from the hand, chest, and ankle sensors included in the Physical Activity Monitoring Data Set (PAMAP2), a public dataset containing recordings from nine subjects performing 12 activities such as walking, running, sitting, ascending stairs, or ironing. The evaluation was performed using Leave-One-Subject-Out (LOSO) cross-validation: all recordings from one subject are used as the testing subset, and recordings from the remaining subjects are used as the training subset. The obtained results suggest that strategies using autoencoders to adapt training data to testing data improve some users' performance. Moreover, training data selection algorithms with autoencoders provide significant improvements. The GAN approach, using either the generator or the discriminator module, also provides improvements in the selection experiments.
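The FFT preprocessing and LOSO protocol described above can be sketched as follows. This is a minimal illustration on synthetic windows, not the paper's actual pipeline: a KNN classifier stands in for the CNN described in the abstract, and the data, window length, and class count are all made up for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for fixed-length accelerometer windows:
# (n_windows, window_len), with a subject label per window.
rng = np.random.default_rng(0)
n_subjects, windows_per_subject, window_len = 4, 30, 128
X = rng.normal(size=(n_subjects * windows_per_subject, window_len))
y = rng.integers(0, 3, size=len(X))                      # 3 toy activity classes
subjects = np.repeat(np.arange(n_subjects), windows_per_subject)

def fft_features(windows):
    """Normalized magnitude spectrum of each window (rfft keeps the
    non-redundant half of the spectrum for real-valued signals)."""
    spectrum = np.abs(np.fft.rfft(windows, axis=1))
    return spectrum / (spectrum.sum(axis=1, keepdims=True) + 1e-9)

# Leave-One-Subject-Out: hold out every recording of one subject,
# train on all remaining subjects, and repeat for each subject.
accuracies = []
for held_out in np.unique(subjects):
    train, test = subjects != held_out, subjects == held_out
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(fft_features(X[train]), y[train])
    accuracies.append(accuracy_score(y[test], clf.predict(fft_features(X[test]))))

print(f"LOSO mean accuracy: {np.mean(accuracies):.2f}")
```

Because labels here are random, the reported accuracy is near chance; the point of the sketch is the evaluation structure, in which no window from the held-out subject ever appears in the training subset.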


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1669
Author(s):  
Philip Boyer ◽  
David Burns ◽  
Cari Whyne

Out-of-distribution (OOD) in the context of Human Activity Recognition (HAR) refers to data from activity classes that are not represented in the training data of a Machine Learning (ML) algorithm. OOD data are a challenge to classify accurately for most ML algorithms, especially deep learning models, which are prone to overconfident predictions on in-distribution (IND) classes. To simulate the OOD problem in physiotherapy, our team collected a new dataset (SPARS9x) consisting of inertial data captured by smartwatches worn by 20 healthy subjects as they performed supervised physiotherapy exercises (IND), followed by a minimum of 3 h of data captured per subject as they engaged in unrelated and unstructured activities (OOD). In this paper, we experiment with three traditional algorithms for OOD detection using engineered statistical features and deep learning-generated features, as well as several popular deep learning approaches, on SPARS9x and two other publicly-available human activity datasets (MHEALTH and SPARS). We demonstrate that, while deep learning algorithms perform better than simple traditional algorithms such as KNN with engineered features for in-distribution classification, traditional algorithms outperform deep learning approaches for OOD detection on these HAR time series datasets.
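A minimal sketch of the kind of distance-based traditional OOD detector the abstract alludes to, assuming synthetic feature vectors rather than the SPARS9x features: the OOD score of a sample is its distance to its k-th nearest in-distribution training neighbor, and the rejection threshold is calibrated on the training set itself. The cluster locations, dimensionality, and percentile are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Synthetic stand-in features: in-distribution exercise windows cluster near
# the origin; OOD free-living windows are shifted far away.
X_train_ind = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
X_test_ind = rng.normal(loc=0.0, scale=1.0, size=(50, 8))
X_test_ood = rng.normal(loc=6.0, scale=1.0, size=(50, 8))

# OOD score = distance to the 5th nearest in-distribution training sample.
knn = NearestNeighbors(n_neighbors=5).fit(X_train_ind)

def ood_score(x):
    dists, _ = knn.kneighbors(x)
    return dists[:, -1]

# Calibrate the threshold so ~5% of training samples would be flagged.
threshold = np.percentile(ood_score(X_train_ind), 95)
flagged_ind = ood_score(X_test_ind) > threshold
flagged_ood = ood_score(X_test_ood) > threshold

print(f"false-positive rate on IND: {flagged_ind.mean():.2f}")
print(f"detection rate on OOD:      {flagged_ood.mean():.2f}")
```

The appeal of this approach, consistent with the abstract's finding, is that it makes no confident extrapolation outside the training distribution: scores grow monotonically with distance from the training data, whereas a softmax classifier can assign high confidence to samples far from anything it has seen.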

