Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors

2020 ◽  
Vol 10 (20) ◽  
pp. 7122
Author(s):  
Ahmad Jalal ◽  
Mouazma Batool ◽  
Kibum Kim

The classification of human activity is becoming one of the most important areas of human health monitoring and physical fitness. With physical activity recognition applications, people suffering from various diseases can be monitored efficiently and medical treatment can be administered in a timely fashion. These applications could improve remote services for health care monitoring and delivery. However, the fixed health monitoring devices provided in hospitals limit the subjects' movement. Our work therefore focuses on wearable sensors that provide remote monitoring, periodically checking human health through different postures and activities so that people receive timely and effective treatment. In this paper, we propose a novel human activity recognition (HAR) system with multiple combined features to monitor human physical movements from continuous sequences captured by tri-axial inertial sensors. The proposed HAR system filters the 1D signals with a notch filter, examining the lower/upper cutoff frequencies to obtain clean wearable sensor data. It then computes multiple combined features, i.e., statistical features, Mel Frequency Cepstral Coefficients, and Gaussian Mixture Model features. For the classification and recognition engine, a Decision Tree classifier optimized by the Binary Grey Wolf Optimization algorithm is proposed. The proposed system is applied and tested on three challenging benchmark datasets to assess its feasibility. The experimental results show that it attains an exceptional level of performance compared to conventional solutions, achieving accuracy rates of 88.25%, 93.95%, and 96.83% on MOTIONSENSE, MHEALTH, and the proposed self-annotated IM-AccGyro human-machine dataset, respectively.
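As an illustration of the filtering and feature-extraction steps described above, the following is a minimal Python sketch, not the authors' implementation: it notch-filters one tri-axial accelerometer window with SciPy and computes a handful of the statistical features mentioned in the abstract. The sampling rate, notch frequency, and quality factor are assumed values.

```python
# Hypothetical sketch: notch-filter a tri-axial accelerometer window and
# compute a few of the statistical features named in the abstract.
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 50.0          # assumed sampling rate (Hz)
NOTCH_FREQ = 0.5   # assumed interference frequency to suppress (Hz)
Q_FACTOR = 2.0     # assumed notch quality factor

def notch_filter(signal_1d, fs=FS, f0=NOTCH_FREQ, q=Q_FACTOR):
    """Zero-phase notch filtering of a single 1D sensor channel."""
    b, a = iirnotch(f0, q, fs=fs)
    return filtfilt(b, a, signal_1d)

def statistical_features(window):
    """Per-axis mean, std, min, max, and RMS for a (n_samples, 3) window."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(), np.sqrt(np.mean(x ** 2))]
    return np.array(feats)

# Example: one 2-second tri-axial window of synthetic data
window = np.random.randn(int(2 * FS), 3)
filtered = np.column_stack([notch_filter(window[:, i]) for i in range(3)])
print(statistical_features(filtered).shape)  # (15,)
```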

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Jian Sun ◽  
Yongling Fu ◽  
Shengguang Li ◽  
Jie He ◽  
Cheng Xu ◽  
...  

Human activity recognition (HAR) problems have traditionally been solved with engineered features obtained by heuristic methods. These methods ignore the time information of the streaming sensor data and cannot achieve sequential human activity recognition. Traditional statistical learning methods can also easily become trapped in a local minimum rather than the global optimum and suffer from low efficiency. We therefore propose a hybrid deep framework based on convolution operations, LSTM recurrent units, and an ELM classifier; its advantages are that it (1) does not require expert knowledge for feature extraction, (2) models the temporal dynamics of features, and (3) is better suited to classifying the extracted features and shortens the runtime. These advantages make it superior to other HAR algorithms. We evaluate our framework on the OPPORTUNITY dataset, which was used in the OPPORTUNITY challenge. Results show that our proposed method outperforms deep nonrecurrent networks by 6% and the previously reported best result by 8%. Compared with a neural network trained by the BP algorithm, the testing time is reduced by 38%.
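A rough sketch of the hybrid idea, not the authors' code, is shown below: a small Conv1d + LSTM feature extractor in PyTorch followed by an ELM-style readout whose output weights are solved in closed form. The channel count, window length, and hidden sizes are assumptions chosen to resemble OPPORTUNITY-style data.

```python
# Hypothetical sketch of the convolution + LSTM + ELM pipeline described above.
import torch
import torch.nn as nn

class ConvLSTMExtractor(nn.Module):
    def __init__(self, n_channels=113, conv_dim=64, lstm_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))  # (batch, conv_dim, time)
        out, _ = self.lstm(h.transpose(1, 2))
        return out[:, -1, :]              # last-step feature vector

def elm_fit(features, targets, hidden=256):
    """ELM-style classifier: random projection + ridge least squares."""
    w_in = torch.randn(features.shape[1], hidden)
    h = torch.tanh(features @ w_in)
    # closed-form (ridge) solution for the output weights
    w_out = torch.linalg.solve(h.T @ h + 1e-3 * torch.eye(hidden), h.T @ targets)
    return w_in, w_out

# toy usage with assumed dimensions: 113 channels, 24-sample windows, 18 classes
x = torch.randn(8, 24, 113)
feats = ConvLSTMExtractor()(x).detach()
y_onehot = torch.eye(18)[torch.randint(0, 18, (8,))]
w_in, w_out = elm_fit(feats, y_onehot)
pred = torch.tanh(feats @ w_in) @ w_out
print(pred.shape)  # torch.Size([8, 18])
```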


2019 ◽  
Author(s):  
Jessica Sena ◽  
William Robson Schwartz

Sensor-based Human Activity Recognition (HAR) provides valuable knowledge to many areas. Recently, wearable devices have gained space as a relevant source of data. However, there are two issues: the large number of heterogeneous sensors available and the temporal nature of the sensor data. To handle these issues, we propose a multimodal approach that processes each sensor separately and, through an ensemble of Deep Convolutional Neural Networks (DCNNs), extracts information from multiple temporal scales of the sensor data. In this ensemble, we use a convolutional kernel with a different height for each DCNN. Since the number of rows in the sensor data reflects the data captured over time, each kernel height corresponds to a temporal scale from which we can extract patterns. Consequently, our approach is able to extract information from simple movement patterns, such as a wrist twist when picking up a spoon, to complex movements such as the human gait. This multimodal and multi-temporal approach outperforms previous state-of-the-art works on seven important datasets using two different protocols. In addition, we demonstrate that our proposed set of kernels improves sensor-based HAR in another multi-kernel approach, the widely employed Inception network.
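The multi-temporal ensemble can be sketched as one small CNN branch per kernel height, where each height spans a different number of time steps (rows) of a sensor window. The kernel heights, filter counts, and window shape below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch of a multi-kernel-height DCNN ensemble for one sensor.
import torch
import torch.nn as nn

class TemporalScaleCNN(nn.Module):
    def __init__(self, kernel_height, n_axes=3, n_filters=32):
        super().__init__()
        # 2D convolution over (time, axes); the kernel height sets the temporal scale
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(kernel_height, n_axes))
        self.pool = nn.AdaptiveMaxPool2d((1, 1))

    def forward(self, x):            # x: (batch, 1, time, axes)
        return self.pool(torch.relu(self.conv(x))).flatten(1)

class MultiTemporalEnsemble(nn.Module):
    def __init__(self, kernel_heights=(3, 7, 15), n_classes=6):
        super().__init__()
        self.branches = nn.ModuleList(TemporalScaleCNN(k) for k in kernel_heights)
        self.classifier = nn.Linear(32 * len(kernel_heights), n_classes)

    def forward(self, x):
        return self.classifier(torch.cat([b(x) for b in self.branches], dim=1))

window = torch.randn(4, 1, 128, 3)   # 4 windows, 128 time steps, 3 axes (assumed)
print(MultiTemporalEnsemble()(window).shape)  # torch.Size([4, 6])
```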


2019 ◽  
Author(s):  
Ramin Ramezani ◽  
Wenhao Zhang ◽  
Zhuoer Xie ◽  
John Shen ◽  
David Elashoff ◽  
...  

BACKGROUND: Health care has, in recent years, made great leaps in integrating wireless technology into traditional models of care. The availability of ubiquitous devices such as wearable sensors has enabled researchers to collect voluminous datasets and harness them for a wide range of health care topics. One goal of using on-body wearable sensors has been to study and analyze human activity and functional patterns and thereby predict harmful outcomes such as falls. They can also be used to track precise individual movements to form personalized behavioral patterns and to standardize concepts such as frailty and well-being/independence. Most wearable devices, such as activity trackers and smartwatches, are equipped with low-cost embedded sensors that can provide users with health statistics. In addition to wearable devices, Bluetooth low-energy sensors known as BLE beacons have gained traction among researchers in the ambient intelligence domain. The low cost and durability of newer versions have made BLE beacons feasible gadgets for obtaining indoor localization data, an adjunct feature in human activity recognition. In the studies by Moatamed et al and the patent application by Ramezani et al, we introduced a generic framework (Sensing At-Risk Population) that draws on the classification of human movements using a 3-axial accelerometer and the extraction of indoor localization using BLE beacons, in concert.

OBJECTIVE: The study aimed to examine the ability of a combination of physical activity and indoor location features, extracted at baseline from a cohort of 154 rehabilitation-dwelling patients, to discriminate between subacute care patients who are readmitted to the hospital and patients who are able to stay in a community setting.

METHODS: We analyzed physical activity sensor features to assess activity time and intensity. We also analyzed activities with regard to indoor localization. Chi-square and Kruskal-Wallis tests were used to compare demographic and sensor feature variables across outcome groups. Random forests were used to build predictive models based on the most significant features.

RESULTS: Standing time percentage (P<.001, d=1.51), lying down time percentage (P<.001, d=1.35), resident room energy intensity (P<.001, d=1.25), resident bed energy intensity (P<.001, d=1.23), and energy percentage of the active state (P=.001, d=1.24) were the 5 most statistically significant features in distinguishing outcome groups at baseline. The energy intensity of the resident room (P<.001, d=1.25) was obtained by capturing indoor localization information. Random forests revealed that the energy intensity of the resident room, as a standalone attribute, is the most sensitive parameter for identifying outcome groups (area under the curve=0.84).

CONCLUSIONS: This study demonstrates that a combination of indoor localization and physical activity tracking produces a series of baseline features, a subset of which can better distinguish between at-risk patients who regain independence and patients who are rehospitalized.
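In outline, the statistical-testing and random-forest pipeline reads like the following hedged sketch on synthetic placeholder data; the feature names are stand-ins, not the study's variables.

```python
# Hypothetical sketch: Kruskal-Wallis group comparison on one baseline feature,
# then a random forest evaluated by ROC AUC, mirroring the analysis pipeline.
import numpy as np
from scipy.stats import kruskal
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 154
# placeholder features: standing-time %, lying-time %, room energy intensity
X = rng.normal(size=(n, 3))
y = rng.integers(0, 2, size=n)        # 0 = stayed in community, 1 = readmitted

# univariate comparison of one feature between outcome groups
stat, p = kruskal(X[y == 0, 0], X[y == 1, 0])
print(f"Kruskal-Wallis H={stat:.2f}, p={p:.3f}")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```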


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 194
Author(s):  
Sarah Gonzalez ◽  
Paul Stegall ◽  
Harvey Edwards ◽  
Leia Stirling ◽  
Ho Chit Siu

The field of human activity recognition (HAR) often uses wearable sensors and machine learning techniques to identify the actions of a subject. This paper considers the recognition of walking and running using a support vector machine (SVM) trained on principal components derived from wearable sensor data. An ablation analysis is performed to select the subset of sensors that yields the highest classification accuracy. The paper also compares principal components across trials to assess the similarity of the trials. Five subjects were instructed to perform standing, walking, running, and sprinting on a self-paced treadmill, and the data were recorded using surface electromyography sensors (sEMGs), inertial measurement units (IMUs), and force plates. When all the sensors were included, the SVM achieved over 90% classification accuracy using only the first three principal components of the data for the classes stand, walk, and run/sprint (combined run and sprint class). Sensors placed only on the lower leg were found to produce higher accuracies than sensors placed on the upper leg. There was a small decrease in accuracy when the force plates were ablated, but the difference may not be operationally relevant. Using only accelerometers without sEMGs was shown to decrease the accuracy of the SVM.
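The classification pipeline described above (principal components followed by an SVM) can be sketched with scikit-learn as below; the data are synthetic and the feature dimensionality and class labels are assumptions.

```python
# Hypothetical sketch: PCA to three components, then an RBF SVM, as a stand-in
# for the pipeline described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 40))        # placeholder IMU + sEMG window features
y = rng.integers(0, 3, size=600)      # 0 = stand, 1 = walk, 2 = run/sprint

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```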


2021 ◽  
Vol 15 (6) ◽  
pp. 1-17
Author(s):  
Chenglin Li ◽  
Carrie Lu Tong ◽  
Di Niu ◽  
Bei Jiang ◽  
Xiao Zuo ◽  
...  

Deep learning models for human activity recognition (HAR) based on sensor data have been heavily studied recently. However, the generalization ability of deep models on complex real-world HAR data is limited by the availability of high-quality labeled activity data, which are hard to obtain. In this article, we design a similarity embedding neural network that maps input sensor signals onto real vectors through carefully designed convolutional and Long Short-Term Memory (LSTM) layers. The embedding network is trained with a pairwise similarity loss, encouraging the clustering of samples from the same class in the embedded real space, and can be effectively trained on a small dataset and even on a noisy dataset with mislabeled samples. Based on the learned embeddings, we further propose both nonparametric and parametric approaches for activity recognition. Extensive evaluation based on two public datasets has shown that the proposed similarity embedding network significantly outperforms state-of-the-art deep models on HAR classification tasks, is robust to mislabeled samples in the training set, and can also be used to effectively denoise a noisy dataset.
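A minimal sketch of a pairwise similarity loss of the kind described above, assuming a contrastive formulation over all pairs in a mini-batch: same-class embeddings are pulled together and different-class embeddings are pushed apart by a margin. The embedding network itself is omitted and the margin is an assumed value.

```python
# Hypothetical sketch of a pairwise similarity (contrastive) loss for embeddings.
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(embeddings, labels, margin=1.0):
    """Contrastive-style loss over all pairs in a mini-batch."""
    dists = torch.cdist(embeddings, embeddings)              # (B, B) distances
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_loss = same * dists.pow(2)                           # pull same-class pairs
    neg_loss = (1 - same) * F.relu(margin - dists).pow(2)    # push different-class pairs
    mask = 1 - torch.eye(len(labels))                        # exclude self-pairs
    return ((pos_loss + neg_loss) * mask).sum() / mask.sum()

emb = torch.randn(16, 32, requires_grad=True)   # toy embeddings
lbl = torch.randint(0, 6, (16,))
loss = pairwise_similarity_loss(emb, lbl)
loss.backward()
print(loss.item())
```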


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 111
Author(s):  
Pengjia Tu ◽  
Junhuai Li ◽  
Huaijun Wang ◽  
Ting Cao ◽  
Kan Wang

Human activity recognition (HAR) has vital applications in human–computer interaction, somatosensory games, motion monitoring, and other areas. Based on human motion acceleration sensor data, and through a nonlinear analysis of the human motion time series, a novel HAR method based on nonlinear chaotic features is proposed in this paper. First, the C-C method and the G-P algorithm are used to compute the optimal delay time and embedding dimension, respectively, and a Reconstructed Phase Space (RPS) is formed by applying time-delay embedding to the accelerometer data. Subsequently, a two-dimensional chaotic feature matrix is constructed, where the chaotic feature is composed of the correlation dimension and the largest Lyapunov exponent (LLE) of the attractor trajectory in the RPS. Classification algorithms are then used to recognize two different activity classes, i.e., basic and transitional activities. The experimental results show that the chaotic features achieve higher accuracy than traditional time- and frequency-domain features.
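Time-delay embedding, the core step in forming the Reconstructed Phase Space, can be sketched in a few lines of NumPy. The delay and embedding dimension below are assumed values, whereas the paper estimates them with the C-C method and the G-P algorithm.

```python
# Hypothetical sketch: time-delay embedding of a 1D acceleration series into an RPS.
import numpy as np

def delay_embed(x, dim, tau):
    """Return the RPS trajectory matrix with shape (n_points, dim)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 10, 500)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(500)  # toy signal
rps = delay_embed(accel, dim=3, tau=8)   # assumed dim and delay
print(rps.shape)   # (484, 3) trajectory points in the reconstructed phase space
```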


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 692
Author(s):  
Jingcheng Chen ◽  
Yining Sun ◽  
Shaoming Sun

Human activity recognition (HAR) is essential in many health-related fields. A variety of technologies based on different sensors have been developed for HAR. Among them, fusion of heterogeneous wearable sensors has been pursued because it is portable, non-interventional, and accurate for HAR. To be applied in real time with limited resources, the activity recognition system must be compact and reliable. This requirement can be achieved by feature selection (FS): by eliminating irrelevant and redundant features, the system burden is reduced while good classification performance (CP) is maintained. This manuscript proposes a two-stage genetic-algorithm-based feature selection algorithm with a fixed activation number (GFSFAN), which is applied to datasets with a variety of time, frequency, and time-frequency domain features extracted from the collected raw time series of nine activities of daily living (ADL). Six classifiers are used to evaluate the effect of the feature subsets selected by different FS algorithms on HAR performance. The results indicate that GFSFAN can achieve good CP with a small feature subset. A sensor-to-segment coordinate calibration algorithm and a lower-limb joint angle estimation algorithm are also introduced. Experiments on the effects of the calibration and of the introduced joint angles show that both can improve the CP.
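A simplified stand-in for genetic-algorithm feature selection with a fixed activation number (not the paper's GFSFAN implementation): individuals are binary masks with exactly k active features, fitness is cross-validated accuracy, and mutation swaps one active feature for an inactive one so the activation number stays fixed.

```python
# Hypothetical sketch of GA feature selection with a fixed number of active features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 30))
y = (X[:, 0] + X[:, 5] - X[:, 9] > 0).astype(int)   # only 3 informative features

def random_mask(n_feat, k):
    mask = np.zeros(n_feat, dtype=bool)
    mask[rng.choice(n_feat, k, replace=False)] = True
    return mask

def fitness(mask):
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def mutate(mask):
    """Swap one active feature for an inactive one, keeping k fixed."""
    child = mask.copy()
    on, off = np.flatnonzero(child), np.flatnonzero(~child)
    child[rng.choice(on)] = False
    child[rng.choice(off)] = True
    return child

pop = [random_mask(30, 5) for _ in range(20)]
for _ in range(15):                       # a few generations of select-and-mutate
    pop = sorted(pop, key=fitness, reverse=True)[:10]
    pop += [mutate(p) for p in pop]
best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```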


Author(s):  
Jiyuan Gao ◽  
Kezheng Shang ◽  
Yichun Ding ◽  
Zhenhai Wen

Flexible and wearable sensors have shown great potential in numerous applications such as human health monitoring, smart robots, and human–machine interfaces, yet the lack of suitable flexible power supply devices...

