A Benchmark of Data Stream Classification for Human Activity Recognition on Connected Objects

Sensors, 2020, Vol. 20 (22), pp. 6486
Author(s): Martin Khannouz, Tristan Glatard

This paper evaluates data stream classifiers from the perspective of connected devices, focusing on the use case of Human Activity Recognition. We measure both the classification performance and the resource consumption (runtime, memory, and power) of five commonly used stream classification algorithms, implemented in a consistent library and applied to two real human activity datasets and three synthetic datasets. Regarding classification performance, the results show the overall superiority of the Hoeffding Tree, the Mondrian forest, and the Naïve Bayes classifiers over the Feedforward Neural Network and the Micro Cluster Nearest Neighbor classifiers on four of the six datasets, including the real ones. In addition, the Hoeffding Tree and, to some extent, the Micro Cluster Nearest Neighbor are the only classifiers that can recover from a concept drift. Overall, the three leading classifiers still perform substantially worse than an offline classifier on the real datasets. Regarding resource consumption, the Hoeffding Tree and the Mondrian forest are the most memory intensive and have the longest runtime; however, no difference in power consumption is found between classifiers. We conclude that stream learning for Human Activity Recognition on connected objects is challenged by two factors that point to interesting future work: high memory consumption and overall low F1 scores.
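For context, stream classifiers such as those benchmarked above are typically scored with prequential (test-then-train) evaluation: each incoming sample is first used for prediction and only then for training. The sketch below illustrates that loop in Python with the river library as a stand-in for the paper's own implementations; it is not the benchmark's actual code, and `activity_stream` is a hypothetical iterable of (feature dict, label) pairs.

```python
# Illustrative sketch only: prequential (test-then-train) evaluation of two of the
# stream classifiers discussed above, using the `river` library as a stand-in for
# the paper's implementation. `activity_stream` is a hypothetical iterable of
# (feature_dict, label) pairs, e.g. accelerometer features labelled with activities.
from river import tree, naive_bayes, metrics

classifiers = {
    "HoeffdingTree": tree.HoeffdingTreeClassifier(),
    "GaussianNB": naive_bayes.GaussianNB(),
}
scores = {name: metrics.MacroF1() for name in classifiers}

for x, y in activity_stream:           # x: dict of features, y: activity label
    for name, model in classifiers.items():
        y_pred = model.predict_one(x)  # test first ...
        if y_pred is not None:
            scores[name].update(y, y_pred)
        model.learn_one(x, y)          # ... then train on the same sample

for name, metric in scores.items():
    print(name, metric)
```

Per-classifier memory and runtime, which the paper also measures, would be tracked separately (e.g. by timing the loop and inspecting model size); they are omitted here for brevity.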

Author(s): Moses L. Gadebe, Okuthe P. Kogeda, Sunday O. Ojo

Recognizing human activity in real time with a limited dataset is possible on a resource-constrained device. However, most classification algorithms, such as Support Vector Machines, C4.5, and K Nearest Neighbor, require a large dataset to accurately predict human activities. In this paper, we present a novel real-time human activity recognition model based on the Gaussian Naïve Bayes (GNB) algorithm, using a personalized JavaScript Object Notation (JSON) dataset extracted from the publicly available Physical Activity Monitoring for Aging People dataset and the University of Southern California Human Activity dataset. With the proposed method, the personalized JSON training dataset is extracted and compressed into a 12×8 multi-dimensional array of time-domain features computed from a signal magnitude vector and tilt angles derived from tri-axial accelerometer sensor data. The algorithm is implemented on the Android platform using the Cordova cross-platform framework with HTML5 and JavaScript. Leave-one-activity-out cross validation is implemented as a testTrainer() function, the results of which are presented using a confusion matrix. The testTrainer() function leaves category K as the testing subset and uses the remaining K-1 categories as the training dataset to validate the proposed GNB algorithm. The proposed model is inexpensive in terms of memory and computational power owing to the use of a compressed, small training dataset. Each category K test was repeated five times, and the algorithm consistently produced the same result. The simulation using the tilt angle features shows overall precision, recall, F-measure, and accuracy rates of 90%, 99.6%, 94.18%, and 89.51%, respectively, compared to rates of 36.9%, 75%, 42%, and 36.9% when the signal magnitude vector features were used. The results of the simulations confirm that, when using the tilt angle dataset, the GNB algorithm is superior to the Support Vector Machines, C4.5, and K Nearest Neighbor algorithms.
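As a rough illustration of the two feature types named in this abstract, the sketch below shows one common way to compute a signal magnitude vector and tilt angles from tri-axial accelerometer windows and feed them to scikit-learn's Gaussian Naive Bayes. The exact feature definitions, the 12×8 array layout, and the variables `windows` and `labels` are assumptions, not the paper's actual code.

```python
# Illustrative sketch only: common definitions of the signal magnitude vector and
# tilt angle features, summarized per window and classified with Gaussian NB.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def smv(window):
    """Signal magnitude vector per sample: sqrt(ax^2 + ay^2 + az^2)."""
    ax, ay, az = window[:, 0], window[:, 1], window[:, 2]
    return np.sqrt(ax**2 + ay**2 + az**2)

def tilt_angle(window):
    """Tilt of the device w.r.t. gravity, in degrees (one common definition)."""
    magnitude = smv(window)
    return np.degrees(np.arccos(np.clip(window[:, 2] / magnitude, -1.0, 1.0)))

def features(window):
    """Summarize a (n_samples, 3) accelerometer window into time-domain features."""
    m, t = smv(window), tilt_angle(window)
    return [m.mean(), m.std(), m.min(), m.max(),
            t.mean(), t.std(), t.min(), t.max()]

# windows: hypothetical list of (n_samples, 3) arrays; labels: matching activity names
X = np.array([features(w) for w in windows])
model = GaussianNB().fit(X, labels)
print(model.predict(X[:1]))
```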


Proceedings, 2018, Vol. 2 (19), pp. 1242
Author(s): Macarena Espinilla, Javier Medina, Alberto Salguero, Naomi Irvine, Mark Donnelly, ...

Data driven approaches for human activity recognition learn from pre-existing large-scale datasets to generate a classification algorithm that can recognize target activities. Typically, several activities are represented within such datasets, characterized by multiple features computed from sensor devices. Often, some features are found to be more relevant to particular activities, which can make the classification algorithm less accurate at detecting the activities for which those features are less relevant. This work presents an experimental study of human activity recognition with features derived from the acceleration data of a wearable device. Specifically, it analyzes which features are most relevant for each activity and investigates which classifier provides the best accuracy with those features. The results obtained indicate that the best classifier is the k-nearest neighbor and, furthermore, confirm that redundant features exist which generally introduce noise into the classification, leading to decreased accuracy.
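The kind of analysis this abstract describes, ranking acceleration features by relevance and checking how a k-nearest-neighbor classifier behaves when the weakest ones are dropped, can be sketched as below. `X` (feature matrix) and `y` (activity labels) are hypothetical, and the paper's own feature set and selection criterion may differ.

```python
# Illustrative sketch only: per-feature relevance ranking followed by a kNN
# accuracy comparison between the full feature set and its top half.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

relevance = mutual_info_classif(X, y, random_state=0)  # relevance of each feature to the labels
order = np.argsort(relevance)[::-1]                    # most relevant first

knn = KNeighborsClassifier(n_neighbors=5)
for k in (X.shape[1], X.shape[1] // 2):                # all features vs. top half
    cols = order[:k]
    acc = cross_val_score(knn, X[:, cols], y, cv=5).mean()
    print(f"top {k} features: accuracy {acc:.3f}")
```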


2019, Vol. 10 (2), pp. 34-47
Author(s): Bagavathi Lakshmi, S. Parthasarathy

Discovering human activities on mobile devices is a challenging task for human action recognition. The ability of a device to recognize its user's activity is important because it enables context-aware applications and behavior. Recently, machine learning algorithms have been increasingly used for human action recognition. During the past few years, principal component analysis and support vector machines have been widely used for robust human activity recognition. However, given the global dynamic tendency and the complex tasks involved, such robust human activity recognition (HAR) suffers from errors and complexity. To deal with this problem, a machine learning algorithm is proposed and its application to HAR is explored. In this article, a Max Pool Convolution Neural Network based on Nearest Neighbor (MPCNN-NN) is proposed to perform efficient and effective HAR using smartphone sensors by exploiting their inherent characteristics. The MPCNN-NN framework for HAR consists of three steps. In the first step, for each activity, the features of interest (the foreground frame) are detected using median background subtraction. The second step consists of organizing the features (i.e., postures) that represent the strongest generic discriminating characteristics, based on max pooling. The third and final step is HAR based on the Nearest Neighbor rule, which selects the posture that maximizes the probability. Experiments have been conducted to demonstrate the superiority of the proposed MPCNN-NN framework on the KARD (Kinect Activity Recognition Dataset) human action dataset.
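To make the three-step pipeline concrete, the sketch below reduces it to a minimal NumPy version: median background subtraction, max-pool feature aggregation, and nearest-neighbor labeling. Frame shapes, the pool size, the Euclidean distance, and the inputs `train_sequences`, `train_labels`, and `test_sequence` are assumptions, not the paper's exact MPCNN-NN configuration.

```python
# Illustrative sketch only: a stripped-down version of the three steps described
# above, operating on stacks of (T, H, W) frames.
import numpy as np

def foreground(frames):
    """Step 1: median background subtraction over a stack of (T, H, W) frames."""
    background = np.median(frames, axis=0)
    return np.abs(frames - background)

def max_pool_features(fg, pool=4):
    """Step 2: max-pool each foreground frame into a coarse posture descriptor."""
    T, H, W = fg.shape
    fg = fg[:, :H - H % pool, :W - W % pool]               # crop to a multiple of pool
    pooled = fg.reshape(T, fg.shape[1] // pool, pool,
                        fg.shape[2] // pool, pool).max(axis=(2, 4))
    return pooled.reshape(T, -1).max(axis=0)               # one vector per sequence

def nearest_neighbor(query, gallery, labels):
    """Step 3: assign the label of the closest training descriptor."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(dists))]

# train_sequences / train_labels and test_sequence are hypothetical inputs
gallery = np.stack([max_pool_features(foreground(s)) for s in train_sequences])
print(nearest_neighbor(max_pool_features(foreground(test_sequence)), gallery, train_labels))
```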

