An Experimental Comparison Between Seven Classification Algorithms for Activity Recognition

Author(s): Salwa O. Slim, Ayman Atia, Mostafa-Sami M. Mostafa

Sensors, 2019, Vol 19 (14), pp. 3035
Author(s): Dagoberto Cruz-Sandoval, Jessica Beltran-Marquez, Matias Garcia-Constantino, Luis A. Gonzalez-Jasso, Jesus Favela, ...

Activity recognition, a key component of pervasive healthcare monitoring, relies on classification algorithms that require labeled data of individuals performing the activity of interest to train accurate models. Data labeling can be performed in a lab setting, where an individual enacts the activity under controlled conditions. The ubiquity of mobile and wearable sensors allows large datasets to be collected from individuals performing activities in naturalistic conditions; however, gathering accurate data labels for activity recognition remains an expensive and time-consuming process. In this paper we present two novel approaches for semi-automated online data labeling performed by the individual executing the activity of interest. The approaches are designed to address two limitations of self-annotation: (i) the burden on the user of performing and annotating the activity, and (ii) the loss of accuracy caused by the user labeling the data minutes or hours after completing the activity. The first approach is based on recognizing subtle finger gestures performed in response to a data-labeling query. The second approach focuses on labeling activities that have an auditory manifestation, using a classifier to obtain an initial estimate of the activity and a conversational agent to ask the participant for clarification or for additional data. Both approaches are described and evaluated in controlled experiments to assess their feasibility, and their advantages and limitations are discussed. The results show that, although both studies have limitations, they achieve 80% to 90% precision.
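The semi-automated labeling flow described above can be sketched as a simple decision loop: a classifier produces an initial estimate, and the user is queried (e.g. via a subtle gesture or a conversational agent) only when the classifier is not confident. This is an illustrative sketch, not the authors' implementation; the `classify`, `ask_user`, and `threshold` names and the confidence-threshold policy are assumptions introduced here.

```python
def label_with_query(window, classify, ask_user, threshold=0.8):
    """Hypothetical semi-automated online labeling step.

    `classify(window)` is assumed to return an (label, confidence) pair.
    When confidence is high the label is accepted automatically, so the
    user carries no annotation burden; otherwise `ask_user(label)` queries
    the wearer to confirm or correct the estimate, as in the gesture- and
    conversation-based approaches described in the paper.
    """
    label, confidence = classify(window)
    if confidence >= threshold:
        return label            # confident: accept the label automatically
    return ask_user(label)      # uncertain: ask the wearer for clarification
```

Querying only on low confidence is one plausible way to balance the two stated limitations: it reduces interruptions while keeping labels timely, since clarification is requested immediately rather than hours later.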


Proceedings, 2018, Vol 2 (19), pp. 1210
Author(s): Luis A. González-Jasso, Jesus Favela

Supervised activity recognition algorithms require labeled data to train classification models. An activity can be labeled through observation, under controlled conditions, or through self-labeling. The first two approaches are intrusive, which makes the task tedious both for the person performing the activity and for the one labeling it. This paper proposes a technique for activity labeling using subtle gestures that are simple to execute and that can be sensed and recognized using smartwatches. The signals obtained from the inertial sensors in a smartwatch are used to train classification algorithms to identify the gesture. We obtained data from 15 participants who executed 6 proposed gestures in 3 different positions. A total of 208 features were computed from the accelerometer and gyroscope signals and used to train two classification algorithms to detect the six proposed gestures. The results achieve a precision of 81% for the 6 subtle gestures, and 91% when using only the first 3 gestures.
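The feature-extraction step described above can be illustrated with a minimal sketch. The paper computes 208 features from the accelerometer and gyroscope signals; the snippet below is not that feature set, only a small representative subset (mean, standard deviation, minimum, and maximum per axis over a fixed window of 6-axis samples), using just the Python standard library.

```python
import statistics

def window_features(window):
    """Compute simple per-axis statistics for one window of 6-axis
    inertial samples (ax, ay, az, gx, gy, gz).

    Illustrative sketch only: yields 4 statistics x 6 axes = 24 features,
    a small subset of the kind of time-domain features typically fed to
    classifiers for gesture recognition.
    """
    features = []
    for axis in range(6):
        series = [sample[axis] for sample in window]
        features.extend([
            statistics.fmean(series),   # mean
            statistics.pstdev(series),  # population standard deviation
            min(series),                # minimum
            max(series),                # maximum
        ])
    return features
```

Each windowed feature vector, paired with the gesture label collected during the experiment, would then form one training example for the classification algorithms.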

