SenseCollect

Author(s):  
Wenqiang Chen ◽  
Shupei Lin ◽  
Elizabeth Thompson ◽  
John Stankovic

On-body sensor-based human activity recognition (HAR) lags behind other fields because it lacks large-scale, labeled datasets; this shortfall impedes progress in developing robust and generalized predictive models. To help researchers collect more extensive datasets quickly and efficiently, we developed SenseCollect. We surveyed and interviewed student researchers in this area to identify the barriers that make it difficult to collect on-body sensor-based HAR data from human subjects. Every interviewee identified data collection as the hardest part of their research, describing it as laborious, time-consuming, and error-prone. Improving HAR data resources requires addressing that barrier, which in turn requires a better understanding of the complicating factors. To that end, we conducted a series of controlled-variable experiments that tested several protocols to ascertain their impact on data collection. The SenseCollect study involved more than 240 human subjects in total, and we present the findings as a data collection guideline. We also implemented a data collection system, created the two largest on-body sensor-based human activity datasets, and made them publicly available.

Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3434 ◽  
Author(s):  
Nattaya Mairittha ◽  
Tittaya Mairittha ◽  
Sozo Inoue

Labeling activity data is a central part of the design and evaluation of human activity recognition systems. System performance depends greatly on the quantity and quality of annotations; it is therefore inevitable to rely on users and to keep them motivated to provide activity labels. As mobile and embedded devices increasingly run deep learning models to infer user context, we propose to exploit on-device deep learning inference, using a long short-term memory (LSTM)-based method, to reduce the labeling effort and ground-truth data collection in activity recognition systems based on smartphone sensors. The novel idea is that estimated activities are used as feedback to motivate users to provide accurate activity labels. To evaluate the approach, we conducted experiments under two conditions: the proposed method, which shows estimated activities produced by on-device deep learning inference, and the traditional method, which shows sentences without estimated activities, both delivered through smartphone notifications. On the dataset gathered, the results show that our proposed method improves both data quality (i.e., the performance of a classification model) and data quantity (i.e., the number of data points collected), indicating that it can improve activity data collection and thereby enhance human activity recognition systems. We discuss the results, limitations, challenges, and implications for on-device deep learning inference in support of activity data collection. We also publish the preliminary dataset to the research community for activity recognition.
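
Conceptually, the on-device inference described here could look like the following minimal PyTorch sketch: a small LSTM classifies an accelerometer window, and the predicted label is what would be surfaced to the user as feedback. The layer sizes, 128-sample window, and six-class label set are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of an on-device-style LSTM activity classifier.
# Sizes and the six-activity label set are illustrative assumptions.
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # classify from the last time step

model = ActivityLSTM()
window = torch.randn(1, 128, 3)         # one 128-sample accelerometer window
pred = model(window).argmax(dim=1)      # estimated activity shown to the user
print(f"Estimated activity id: {pred.item()}")  # e.g., in a notification prompt
```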


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8337
Author(s):  
Hyeokhyen Kwon ◽  
Gregory D. Abowd ◽  
Thomas Plötz

Supervised training of human activity recognition (HAR) systems based on body-worn inertial measurement units (IMUs) is often constrained by the typically rather small amounts of labeled sample data. Systems such as IMUTube have been introduced that employ cross-modality transfer approaches to convert videos of activities of interest into virtual IMU data. We demonstrate for the first time how such large-scale virtual IMU datasets can be used to train HAR systems that are substantially more complex than the state of the art, where complexity is represented by the number of model parameters that can be trained robustly. Our models contain components dedicated to capturing the essentials of IMU data as they are relevant for activity recognition, which increased the number of trainable parameters by a factor of 1100 compared to state-of-the-art model architectures. We evaluate the new model architecture on the challenging task of analyzing free-weight gym exercises, specifically classifying 13 dumbbell exercises. We collected around 41 h of virtual IMU data using IMUTube from exercise videos available on YouTube. The proposed model is trained with this large amount of virtual IMU data and calibrated with a mere 36 min of real IMU data. The trained model was evaluated on a real IMU dataset, and we demonstrate substantial performance improvements of 20% absolute F1 score compared to state-of-the-art convolutional models in HAR.
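
The two-stage regime described here, pretraining on plentiful virtual IMU data and then calibrating on minutes of real IMU data, might be sketched as follows. The tiny stand-in network, synthetic loaders, epoch counts, and learning rates are all illustrative assumptions; the paper's actual architecture is far larger.

```python
# Hedged sketch of pretrain-on-virtual / calibrate-on-real training.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny stand-in CNN; the paper's architecture has ~1100x more parameters.
model = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 13)  # 13 exercises
)

def train(loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Synthetic placeholders for the large virtual and small real IMU sets.
virtual = DataLoader(TensorDataset(torch.randn(1024, 3, 128),
                                   torch.randint(0, 13, (1024,))), batch_size=64)
real = DataLoader(TensorDataset(torch.randn(64, 3, 128),
                                torch.randint(0, 13, (64,))), batch_size=16)

train(virtual, epochs=5, lr=1e-3)   # Stage 1: pretrain on virtual IMU data
train(real, epochs=3, lr=1e-4)      # Stage 2: calibrate on scarce real data
```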


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5206
Author(s):  
Enida Cero Dinarević ◽  
Jasmina Baraković Husić ◽  
Sabina Baraković

Human activity recognition (HAR) is a classification process used for recognizing human motions. This paper presents a comprehensive review of the approaches currently considered in each stage of HAR, as well as the influence of each stage on energy consumption and latency. It highlights the various methods for optimizing energy consumption and latency in each HAR stage that have been used in the literature, analyzing them to provide direction for implementing HAR in health and wellbeing applications. The paper analyzes whether and how each stage of the HAR process affects energy consumption and latency, and shows that data collection and filtering, together with data segmentation and classification, stand out as the key stages for achieving a balance between the two. Since latency is critical only for real-time HAR applications, the energy consumption of sensors and devices stands out as the key challenge for HAR implementation in health and wellbeing applications. Most approaches to overcoming these challenges address the data collection, filtering, and classification stages, while the data segmentation stage needs further exploration. Finally, the paper recommends balancing energy consumption and latency for HAR in health and wellbeing applications in a way that takes into account the context and health of the target population.
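
As a concrete anchor for the segmentation stage the review flags as under-explored, the sketch below shows fixed-length sliding-window segmentation over accelerometer data. The window length and overlap are illustrative assumptions; both parameters trade energy and latency against recognition accuracy (longer windows mean fewer classifier invocations but higher latency).

```python
# Minimal sliding-window segmentation sketch; sizes are assumptions.
import numpy as np

def sliding_windows(signal, window=128, overlap=0.5):
    """Split an (n_samples, n_channels) signal into overlapping windows."""
    step = int(window * (1 - overlap))
    return np.stack([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

accel = np.random.randn(1000, 3)       # 1000 samples of 3-axis acceleration
segments = sliding_windows(accel)      # shape: (n_windows, 128, 3)
print(segments.shape)
```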


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 451-473
Author(s):  
Liliana I. Carvalho ◽  
Rute C. Sofia

Mobile sensing has been gaining ground due to the increasing capabilities of mobile and personal devices that are carried around by citizens, giving access to a large variety of data and services based on the way humans interact. Mobile sensing brings several advantages in terms of the richness of available data, particularly for human activity recognition. Nevertheless, the infrastructure required to support large-scale mobile sensing requires an interoperable design, which is still hard to achieve today. This review paper contributes to raising awareness of challenges faced today by mobile sensing platforms that perform learning and behavior inference with respect to human routines: how current solutions perform activity recognition, which classification models they consider, and which types of behavior inferences can be seamlessly provided. The paper provides a set of guidelines that contribute to a better functional design of mobile sensing infrastructures, keeping scalability as well as interoperability in mind.


2021 ◽  
Vol 25 (2) ◽  
pp. 38-42
Author(s):  
Hyeokhyen Kwon ◽  
Catherine Tong ◽  
Harish Haresamudram ◽  
Yan Gao ◽  
Gregory D. Abowd ◽  
...  

Today's smartphones and wearable devices come equipped with an array of inertial sensors, along with IMU-based Human Activity Recognition models to monitor everyday activities. However, such models rely on large amounts of annotated training data, whose collection requires considerable time and effort: one has to recruit human subjects, define clear protocols for them to follow, and manually annotate the collected data, on top of the administrative work that goes into organizing such recordings.


2021 ◽  
Author(s):  
Mehdi Ejtehadi ◽  
Amin M. Nasrabadi ◽  
Saeed Behzadipour

Background: The advent of inertial measurement unit (IMU) sensors has significantly extended the application domain of human activity recognition (HAR) systems to healthcare, tele-rehabilitation, and daily life monitoring. IMUs are body-worn sensors, and therefore their output signals, and hence HAR performance, naturally depend on their exact location on the body segments. Objectives: This research introduces a methodology for investigating the effects of misplaced sensors on the performance of HAR systems. Methods: Properly placed sensors and their misplaced variations were modeled on a human body kinematic model, which was actuated using motions measured from human subjects and then used to run a sensitivity analysis. Results: The results indicated that transverse misplacement of the sensors on the left arm and right thigh, and rotation of the left thigh sensor, significantly decrease the activity recognition rate. Longitudinal displacements of the sensors (along the body segments) were shown to have only minor impact on HAR performance. A Monte Carlo simulation indicated that if the sensitive sensors are mounted with extra care, performance can be maintained above the 95% level. Conclusions: Accurate mounting of the IMUs on the body affects HAR performance; the transverse position and rotation of the IMUs are particularly sensitive. Users of such systems need to be informed about the more sensitive sensors and directions in order to maintain acceptable HAR performance.
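
The Monte Carlo idea might be sketched as below: repeatedly perturb a sensor's mounting and measure how often a fixed classifier keeps its baseline prediction. Modeling misplacement as a random transverse-plane rotation, the 15° angle spread, and the `classify` stub are all placeholders standing in for the paper's kinematic model and trained HAR system.

```python
# Hedged Monte Carlo sketch of sensitivity to sensor misplacement.
import numpy as np

def rotate_about_z(window, angle_rad):
    """Apply a transverse-plane rotation to a (time, 3) accelerometer window."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return window @ R.T

def classify(window):            # stub standing in for a trained HAR model
    return int(np.abs(window).mean() > 1.0)

rng = np.random.default_rng(0)
window = rng.standard_normal((128, 3))
baseline = classify(window)
trials = [classify(rotate_about_z(window, rng.normal(0, np.deg2rad(15))))
          for _ in range(1000)]
agreement = np.mean([t == baseline for t in trials])
print(f"Prediction kept under misplacement in {agreement:.1%} of trials")
```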


Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1242 ◽  
Author(s):  
Macarena Espinilla ◽  
Javier Medina ◽  
Alberto Salguero ◽  
Naomi Irvine ◽  
Mark Donnelly ◽  
...  

Data-driven approaches for human activity recognition learn from pre-existing large-scale datasets to generate a classification algorithm that can recognize target activities. Typically, several activities are represented within such datasets, characterized by multiple features computed from sensor devices. Often, some features are found to be more relevant to particular activities, which can lead the classification algorithm to provide lower accuracy in detecting activities for which those features are less relevant. This work presents an experimental study of human activity recognition with features derived from the acceleration data of a wearable device. Specifically, it analyzes which features are most relevant for each activity and investigates which classifier provides the best accuracy with those features. The results indicate that the best classifier is the k-nearest neighbor and, furthermore, confirm that redundant features do exist that generally introduce noise into the classification, leading to decreased accuracy.
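
A pipeline of the kind evaluated here, time-domain features computed from acceleration windows and fed to a k-nearest-neighbor classifier, might look like the scikit-learn sketch below. The three features per axis, k = 5, and the synthetic data are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal feature-extraction + k-NN sketch; feature set and k are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(window):                    # window: (time, 3) acceleration
    return np.concatenate([window.mean(axis=0),          # per-axis mean
                           window.std(axis=0),           # per-axis std
                           np.abs(window).max(axis=0)])  # per-axis peak

rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 128, 3))   # synthetic acceleration windows
labels = rng.integers(0, 4, size=200)          # four dummy activities
X = np.array([features(w) for w in windows])

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:150], labels[:150])
print("Held-out accuracy:", knn.score(X[150:], labels[150:]))
```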

