A Review on Scaling Mobile Sensing Platforms for Human Activity Recognition: Challenges and Recommendations for Future Research

IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 451-473
Author(s):  
Liliana I. Carvalho ◽  
Rute C. Sofia

Mobile sensing has been gaining ground due to the increasing capabilities of mobile and personal devices that are carried around by citizens, giving access to a large variety of data and services based on the way humans interact. Mobile sensing brings several advantages in terms of the richness of available data, particularly for human activity recognition. Nevertheless, the infrastructure required to support large-scale mobile sensing requires an interoperable design, which is still hard to achieve today. This review paper contributes to raising awareness of challenges faced today by mobile sensing platforms that perform learning and behavior inference with respect to human routines: how current solutions perform activity recognition, which classification models they consider, and which types of behavior inferences can be seamlessly provided. The paper provides a set of guidelines that contribute to a better functional design of mobile sensing infrastructures, keeping scalability as well as interoperability in mind.

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8337
Author(s):  
Hyeokhyen Kwon ◽  
Gregory D. Abowd ◽  
Thomas Plötz

Supervised training of human activity recognition (HAR) systems based on body-worn inertial measurement units (IMUs) is often constrained by the typically rather small amounts of labeled sample data. Systems like IMUTube have been introduced that employ cross-modality transfer approaches to convert videos of activities of interest into virtual IMU data. We demonstrate for the first time how such large-scale virtual IMU datasets can be used to train HAR systems that are substantially more complex than the state of the art, where complexity is measured by the number of model parameters that can be trained robustly. Our models contain components dedicated to capturing the essentials of IMU data as they are relevant for activity recognition, which increased the number of trainable parameters by a factor of 1100 compared to state-of-the-art model architectures. We evaluate the new model architecture on the challenging task of analyzing free-weight gym exercises, specifically on classifying 13 dumbbell exercises. We collected around 41 h of virtual IMU data using IMUTube from exercise videos available on YouTube. The proposed model is trained with this large amount of virtual IMU data and calibrated with a mere 36 min of real IMU data. The trained model was evaluated on a real IMU dataset, and we demonstrate a substantial performance improvement of 20% absolute F1 score compared to state-of-the-art convolutional models in HAR.
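The two-stage regime described above can be illustrated with a minimal sketch: pre-train a model on abundant virtual IMU windows, then calibrate it on a small real IMU set at a lower learning rate. The architecture, tensor shapes, and hyperparameters below are illustrative assumptions (random tensors stand in for the IMUTube-derived and real recordings), not the authors' actual model.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder tensors stand in for virtual IMU windows (video -> virtual
# IMU via IMUTube) and the small real IMU calibration set; all shapes and
# sizes here are assumptions for illustration.
N_VIRTUAL, N_REAL, CHANNELS, WINDOW, N_CLASSES = 2048, 64, 3, 128, 13
virtual_x = torch.randn(N_VIRTUAL, CHANNELS, WINDOW)
virtual_y = torch.randint(0, N_CLASSES, (N_VIRTUAL,))
real_x = torch.randn(N_REAL, CHANNELS, WINDOW)
real_y = torch.randint(0, N_CLASSES, (N_REAL,))

class ConvHAR(nn.Module):
    """A small 1-D CNN over inertial windows; a toy stand-in for the far
    larger architecture the paper trains on virtual IMU data."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(CHANNELS, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

def run_epochs(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

model = ConvHAR(N_CLASSES)
# Stage 1: train on the abundant virtual IMU data.
run_epochs(model, DataLoader(TensorDataset(virtual_x, virtual_y),
                             batch_size=64, shuffle=True), epochs=5, lr=1e-3)
# Stage 2: calibrate on the small real IMU set at a lower learning rate.
run_epochs(model, DataLoader(TensorDataset(real_x, real_y),
                             batch_size=16, shuffle=True), epochs=3, lr=1e-4)
```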


2021 ◽  
Vol 11 (5) ◽  
pp. 2188
Author(s):  
Athanasios Anagnostis ◽  
Lefteris Benos ◽  
Dimitrios Tsaopoulos ◽  
Aristotelis Tagarakis ◽  
Naoum Tsolakis ◽  
...  

The present study deals with human awareness, which is a very important aspect of human–robot interaction. This feature is particularly essential in agricultural environments, owing to the information-rich setup that they provide. The objective of this investigation was to recognize human activities associated with an envisioned synergistic task. To attain this goal, a data collection field experiment was designed that derived data from twenty healthy participants wearing five sensors (embedded with tri-axial accelerometers, gyroscopes, and magnetometers). The task involved several sub-activities related to load lifting and carrying, which were carried out by agricultural workers in real field conditions. Subsequently, the signals obtained from the on-body sensors were processed for noise removal and fed into a Long Short-Term Memory (LSTM) neural network, which is widely used in deep learning for feature recognition in time-dependent data sequences. The proposed methodology demonstrated considerable efficacy in predicting the defined sub-activities, with an average accuracy of 85.6%. Moreover, the trained model classified the defined sub-activities with 74.1–90.4% precision and 71.0–96.9% recall. A comparative analysis of each sensor's impact on the model's performance concluded that the combination of all sensors achieves the highest accuracy in human activity recognition. These results confirm the applicability of the proposed methodology for human awareness purposes in agricultural environments, and the dataset has been made publicly available for future research.
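A minimal sketch of an LSTM classifier of the kind described above follows; the channel count (five IMUs × nine axes = 45 channels), the window length, the number of sub-activity classes, and the layer sizes are assumptions for illustration, not the study's configuration.

```python
import torch
import torch.nn as nn

# Assumed setup: five IMUs x nine axes = 45 channels per time step; the
# window length, class count, and layer sizes are likewise illustrative.
N_CHANNELS, WINDOW, N_SUBACTIVITIES = 45, 100, 5

class LSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_CHANNELS, hidden_size=64,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(64, N_SUBACTIVITIES)

    def forward(self, x):                # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # classify from the last time step

model = LSTMClassifier()
batch = torch.randn(8, WINDOW, N_CHANNELS)   # a batch of denoised windows
logits = model(batch)                        # (8, N_SUBACTIVITIES)
print(logits.shape)
```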


Human activity recognition (HAR) describes the basic activities that humans perform, using the sensors available in smartphones. The data for activity recognition is captured by various sensors of mobile phones or wristbands, such as accelerometers, gyroscopes, and gravity sensors. HAR has attracted the attention of many researchers due to its broad demand in the fields of sports training, security, entertainment, health monitoring, computer vision, and robotics. In this project, we compare different machine learning and deep learning algorithms to find a better approach for HAR. The dataset comprises six activities: walking, sleeping, sitting, moving upward, moving downward, and standing. In this demonstration, we also present the confusion matrix, accuracy, and multiclass log loss of the various algorithms. Using the accuracy and confusion matrix of each algorithm, we compare them and determine the best approach for HAR. This will help future research map human activities using one of the best available approaches.
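A minimal sketch of such a comparison, assuming a scikit-learn workflow and synthetic stand-in data for the six activities, might look as follows; the chosen classifiers are illustrative, not necessarily those evaluated in the project.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, log_loss
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the six-activity sensor dataset (walking,
# sleeping, sitting, moving upward, moving downward, standing).
X, y = make_classification(n_samples=1200, n_features=20, n_informative=12,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "k_nearest_neighbors": KNeighborsClassifier(n_neighbors=5),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Report accuracy, multiclass log loss, and the confusion matrix.
    print(name,
          "accuracy:", round(accuracy_score(y_te, pred), 3),
          "log_loss:", round(log_loss(y_te, clf.predict_proba(X_te),
                                      labels=clf.classes_), 3))
    print(confusion_matrix(y_te, pred))
```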


Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1242
Author(s):  
Macarena Espinilla ◽  
Javier Medina ◽  
Alberto Salguero ◽  
Naomi Irvine ◽  
Mark Donnelly ◽  
...  

Data-driven approaches for human activity recognition learn from pre-existing large-scale datasets to generate a classification algorithm that can recognize target activities. Typically, several activities are represented within such datasets, characterized by multiple features computed from sensor devices. Often, some features are found to be more relevant to particular activities, which can lead the classification algorithm to provide lower accuracy when detecting activities for which those features are less relevant. This work presents an experimental study of human activity recognition using features derived from the acceleration data of a wearable device. Specifically, it analyzes which features are most relevant for each activity and investigates which classifier provides the best accuracy with those features. The results indicate that the best classifier is the k-nearest neighbor and, furthermore, confirm that redundant features exist that generally introduce noise into the classification, leading to decreased accuracy.
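The feature-relevance idea can be sketched as follows: compute simple statistical features per acceleration window, rank them by mutual information with the activity label, and score a k-nearest-neighbor classifier on the top-ranked subset. The feature list and synthetic data are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic acceleration windows stand in for the wearable data:
# (samples, axes, readings per window).
rng = np.random.default_rng(0)
windows = rng.standard_normal((400, 3, 128))
labels = rng.integers(0, 4, 400)

def window_features(w):
    """Per-axis mean, std, min, max, and mean signal energy."""
    return np.concatenate([w.mean(axis=1), w.std(axis=1),
                           w.min(axis=1), w.max(axis=1),
                           (w ** 2).mean(axis=1)], axis=None)

X = np.array([window_features(w) for w in windows])

# Rank features by mutual information with the activity label and keep
# only the most relevant ones before fitting k-nearest neighbors.
mi = mutual_info_classif(X, labels, random_state=0)
top = np.argsort(mi)[::-1][:8]
knn = KNeighborsClassifier(n_neighbors=5)
print("cv accuracy:", cross_val_score(knn, X[:, top], labels, cv=5).mean())
```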


Author(s):  
Anna Ferrari ◽  
Daniela Micucci ◽  
Marco Mobilio ◽  
Paolo Napoletano

Recognizing human activities and monitoring population behavior are fundamental needs of our society. Population security, crowd surveillance, healthcare support, living assistance, and lifestyle and behavior tracking are some of the main applications that require the recognition of human activities. Over the past few decades, researchers have investigated techniques that can automatically recognize human activities. This line of research is commonly known as Human Activity Recognition (HAR). HAR involves many tasks, from signal acquisition to activity classification. These tasks are not simple and often require dedicated hardware, sophisticated engineering, and computational and statistical techniques for data preprocessing and analysis. Over the years, different techniques have been tested and different solutions have been proposed to achieve a classification process that provides reliable results. This survey presents the most recent solutions proposed for each task in the human activity classification process, that is, acquisition, preprocessing, data segmentation, feature extraction, and classification. Solutions are analyzed by emphasizing their strengths and weaknesses. For completeness, the survey also presents the metrics commonly used to evaluate the quality of a classifier and the datasets of inertial signals from smartphones that are most often used in the evaluation phase.
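As a concrete example of the segmentation task in this pipeline, the sketch below splits a tri-axial inertial signal into fixed-length sliding windows with 50% overlap, a common (here assumed) choice for smartphone signals.

```python
import numpy as np

def sliding_windows(signal, window_size, overlap=0.5):
    """Split a (time, channels) signal into overlapping fixed-length windows."""
    step = int(window_size * (1 - overlap))
    return np.stack([signal[i:i + window_size]
                     for i in range(0, len(signal) - window_size + 1, step)])

accel = np.random.randn(1000, 3)              # 1000 tri-axial readings
windows = sliding_windows(accel, window_size=128)
print(windows.shape)                          # (n_windows, 128, 3)
```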


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2654
Author(s):  
Xue Ding ◽  
Ting Jiang ◽  
Yi Zhong ◽  
Yan Huang ◽  
Zhiwei Li

Wi-Fi-based device-free human activity recognition has recently become a vital underpinning for various emerging applications, ranging from the Internet of Things (IoT) to Human–Computer Interaction (HCI). Although this technology has been successfully demonstrated for location-dependent sensing, it relies on sufficient data samples for large-scale sensing, which is enormously labor-intensive and time-consuming. In real-world applications, however, location-independent sensing is crucial and indispensable. Therefore, how to alleviate the adverse effects of location variations on recognition accuracy with a limited dataset is still an open question. To address this concern, we present a location-independent human activity recognition system based on Wi-Fi, named WiLiMetaSensing. Specifically, we first leverage a Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) feature representation method to focus on location-independent characteristics. Then, to transfer the model well across different positions with limited data samples, a metric learning-based activity recognition method is proposed. Consequently, both the generalization ability and the transferability of the model are significantly improved. To fully validate the feasibility of the presented approach, extensive experiments have been conducted in an office with 24 testing locations. The evaluation results demonstrate that our method can achieve more than 90% accuracy in location-independent human activity recognition. More importantly, it adapts well to data samples with a small number of subcarriers and a low sampling rate.
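A minimal sketch of a CNN-LSTM encoder trained with a metric-learning objective follows; the subcarrier count, window length, embedding size, and the use of a triplet loss are assumptions meant to illustrate the general approach, not WiLiMetaSensing's actual design.

```python
import torch
import torch.nn as nn

# Assumed CSI dimensions: 30 subcarriers over a 200-sample window.
N_SUBCARRIERS, WINDOW, EMBED_DIM = 30, 200, 64

class CNNLSTMEncoder(nn.Module):
    """Convolution over subcarriers, then an LSTM over time, producing an
    activity embedding for metric learning."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_SUBCARRIERS, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, EMBED_DIM, batch_first=True)

    def forward(self, csi):                  # csi: (batch, subcarriers, time)
        h = self.conv(csi).transpose(1, 2)   # (batch, time/2, 64)
        out, _ = self.lstm(h)
        return out[:, -1]                    # final-step activity embedding

# Metric-learning flavor: pull same-activity embeddings together and push
# different-activity embeddings apart with a triplet loss, so the encoder
# transfers across locations with few samples.
encoder = CNNLSTMEncoder()
anchor, positive, negative = (torch.randn(4, N_SUBCARRIERS, WINDOW)
                              for _ in range(3))
loss = nn.TripletMarginLoss(margin=1.0)(
    encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```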


Information ◽  
2020 ◽  
Vol 12 (1) ◽  
pp. 6
Author(s):  
Sujan Ray ◽  
Khaldoon Alshouiliy ◽  
Dharma P. Agrawal

Human activity recognition (HAR) is a classification task that involves predicting the movement of a person based on sensor data. Smartphones have grown enormously in capability over the last 10–15 years and can serve as a medium of mobile sensing to recognize human activity. Deep learning methods are now in great demand for this purpose; one effective approach is to build a convolutional neural network (CNN). The HAR Using Smartphones dataset has been widely used by researchers to develop machine learning models to recognize human activity. The dataset has two parts: training and testing. In this paper, we propose a hybrid approach to analyze and recognize human activity on this dataset using a deep learning method on a cloud-based platform. We applied principal component analysis (PCA) to the dataset to extract the most important features, and then ran the experiment with all features as well as with the top 48, 92, 138, and 164 features. All experiments were run on Google Colab. For evaluation, the dataset was split into two different ratios, 70–10–20% and 80–10–10%, for training, validation, and testing, respectively. We set the performance of the CNN (70% training, 10% validation, 20% testing) with 48 features as the benchmark for our work. We achieved a maximum accuracy of 98.70% with the CNN, and obtained 96.36% accuracy with the top 92 features of the dataset. The experimental results show that proper feature selection can improve not only the accuracy but also the training and testing time of the model.
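The feature-reduction and data-splitting steps can be sketched as follows, assuming a scikit-learn workflow; the placeholder matrix is shaped like the UCI HAR Using Smartphones data (10,299 samples × 561 features), and the 48 components follow the benchmark setting mentioned above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# Placeholder feature matrix shaped like the UCI smartphone HAR data.
rng = np.random.default_rng(0)
X = rng.standard_normal((10299, 561))
y = rng.integers(0, 6, 10299)           # six activity labels

# Reduce to the 48 most informative principal components.
X_reduced = PCA(n_components=48).fit_transform(X)

# 70% train, then split the remaining 30% into 10% validation / 20% test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X_reduced, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=2/3, random_state=0)
print(X_train.shape, X_val.shape, X_test.shape)
```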


Author(s):  
Wenqiang Chen ◽  
Shupei Lin ◽  
Elizabeth Thompson ◽  
John Stankovic

On-body sensor-based human activity recognition (HAR) lags behind other fields because it lacks large-scale, labeled datasets; this shortfall impedes progress in developing robust and generalized predictive models. To facilitate researchers in collecting more extensive datasets quickly and efficiently, we developed SenseCollect. We surveyed and interviewed student researchers in this area to identify the barriers that make it difficult to collect on-body sensor-based HAR data from human subjects. Every interviewee identified data collection as the hardest part of their research, describing it as laborious, time-consuming, and error-prone. To improve HAR data resources we need to address that barrier, but we first need a better understanding of the complicating factors behind it. To that end, we conducted a series of controlled-variable experiments that tested several protocols to ascertain their impact on data collection. The SenseCollect study involved more than 240 human subjects in total, and we present the findings as a data collection guideline. We also implemented a data collection system, created the two largest on-body sensor-based human activity datasets, and made them publicly available.

