A Feature Set Evaluation for Activity Recognition with Body-Worn Inertial Sensors

Author(s):  
Syed Agha Muhammad ◽  
Bernd Niklas Klein ◽  
Kristof Van Laerhoven ◽  
Klaus David


Sensors ◽ 
2018 ◽  
Vol 18 (12) ◽  
pp. 4132 ◽  
Author(s):  
Ku Ku Abd. Rahim ◽  
I. Elamvazuthi ◽  
Lila Izhar ◽  
Genci Capi

Recent research shows increasing interest in analyzing human gait with various wearable sensors, a field known as Human Activity Recognition (HAR). Sensors such as accelerometers and gyroscopes are widely used in HAR, and wearable sensing has recently attracted attention in applications such as rehabilitation, computer games, animation, filmmaking, and biomechanics. This paper discusses the classification of human daily activities using ensemble methods on data acquired from smartphone inertial sensors, involving about 30 subjects performing six activities: walking, walking upstairs, walking downstairs, sitting, standing, and lying. Recognition proceeds in three stages: signal processing (filtering and segmentation), feature extraction, and classification. Five ensemble classifiers are evaluated: Bagging, AdaBoost, Rotation Forest, Ensembles of Nested Dichotomies (END), and Random Subspace, each employing Support Vector Machine (SVM) and Random Forest (RF) as base learners. Classification is evaluated with both holdout and 10-fold cross-validation. The performance for each daily activity is measured in terms of precision, recall, F-measure, and the receiver operating characteristic (ROC) curve, and overall accuracy is compared across ensemble classifiers and base learners. Overall, SVM produced the better accuracy, 99.22% versus 97.91% for RF, when used with the Random Subspace ensemble classifier.
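The best-performing configuration above, a random-subspace ensemble over SVM base learners evaluated with 10-fold cross-validation, can be reproduced in outline with scikit-learn's BaggingClassifier, which performs feature subsampling when bootstrapping of instances is disabled and max_features is below 1.0. The sketch below is illustrative only: the feature matrix, labels, subspace fraction, and SVM hyperparameters are placeholder assumptions, not the paper's settings.

```python
# A minimal sketch of a random-subspace ensemble with SVM base learners.
# X and y are placeholders for windowed accelerometer/gyroscope features
# and the six activity labels.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 60))        # placeholder: 600 windows x 60 features
y = rng.integers(0, 6, size=600)      # placeholder: 6 activity labels

# Random subspace: each base SVM is trained on a random subset of the
# features, with no bootstrap sampling of instances.
clf = BaggingClassifier(
    SVC(kernel="rbf", C=1.0),
    n_estimators=10,
    max_features=0.5,         # each learner sees 50% of the features
    bootstrap=False,          # use all instances ...
    bootstrap_features=False, # ... and sample features without replacement
)

# 10-fold cross-validation, matching the paper's evaluation protocol.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f}")
```

Swapping the SVC for a RandomForestClassifier would give the analogous RF-based ensemble for comparison.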


Author(s):  
Chih-Ta Yen ◽  
Jia-De Lin

This study employed wearable inertial sensors integrated with an activity-recognition algorithm to recognize six types of daily activities performed by humans, namely walking, ascending stairs, descending stairs, sitting, standing, and lying. The sensor system consisted of a microcontroller, a three-axis accelerometer, and a three-axis gyroscope; the algorithm collected and normalized the activity signals. To simplify computation and maximize recognition accuracy, the data were preprocessed with linear discriminant analysis (LDA), which reduced the dimensionality of the accelerometer and gyroscope signals while capturing their discriminative features; the reduced features were then verified with six classification algorithms. The new contribution is that, after feature extraction, the classification results indicated that an artificial neural network was the most stable and effective of the six algorithms. In the experiment, 20 participants wore the sensors on their waists while performing the six daily activities to verify the effectiveness of the system. According to the cross-validation results, the combination of LDA and an artificial neural network was the most stable classification algorithm for data generalization, with an activity-recognition accuracy of 87.37% on the training data and 80.96% on the test data.
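The described pipeline, normalization followed by LDA dimensionality reduction and a neural-network classifier, maps naturally onto a scikit-learn pipeline. The following is a minimal sketch under assumed placeholders; the paper's exact window features and network architecture are not given here, so the MLP settings are illustrative.

```python
# A minimal sketch of an LDA + neural-network activity classifier.
# X and y are placeholders for normalized inertial windows and the six
# activity labels; the hidden-layer size is an assumption.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 36))    # placeholder: 400 windows x 36 features
y = rng.integers(0, 6, size=400)  # placeholder: 6 activity labels

# LDA projects onto at most (n_classes - 1) = 5 discriminant axes,
# shrinking the feature space before the network sees the data.
pipe = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=5),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)

scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```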


Sensor Review ◽  
2017 ◽  
Vol 37 (1) ◽  
pp. 101-109 ◽  
Author(s):  
Ye Chen ◽  
Zhelong Wang

Purpose: Existing studies on human activity recognition using inertial sensors mainly discuss single activities. However, human activities are often concurrent: a person could be walking while brushing their teeth, or lying down while making a call. The purpose of this paper is to explore an effective way to recognize concurrent activities.

Design/methodology/approach: Concurrent activities usually combine behaviors from different parts of the body, dominated mainly by the lower limbs and the upper body. For this reason, a hierarchical method based on artificial neural networks (ANNs) is proposed to classify them. At the lower level, one ANN uses simple features to recognize the lower-limb state to which a concurrent activity belongs. The upper level then distinguishes between upper-limb movements and infers the specific concurrent activity, using features processed by principal component analysis (PCA).

Findings: An experiment was conducted to collect realistic data from five sensor nodes placed on the subjects' wrist, arm, thigh, ankle, and chest. Experimental results indicate that the proposed hierarchical method can distinguish between 14 concurrent activities with a high classification rate of 92.6 per cent, significantly outperforming a single-level recognition method.

Practical implications: In the future, the research may play an important role in areas such as daily behavior monitoring, smart assisted living, postoperative rehabilitation, and eldercare support.

Originality/value: To provide more accurate information on people's behaviors, human concurrent activities are discussed and effectively recognized using a hierarchical method.
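A minimal sketch of the two-level scheme is given below: one ANN predicts the lower-limb state from simple features, and a per-state ANN trained on PCA-reduced features resolves the specific concurrent activity. All arrays, dimensionalities, and network sizes are placeholder assumptions, not the authors' configuration.

```python
# A minimal sketch of a hierarchical (two-level) activity classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 900
X_simple = rng.normal(size=(n, 12))     # placeholder: simple lower-limb features
X_full = rng.normal(size=(n, 60))       # placeholder: full-body features
state = rng.integers(0, 3, size=n)      # placeholder: 3 lower-limb states
activity = rng.integers(0, 14, size=n)  # placeholder: 14 concurrent activities

# Lower level: a single ANN recognizes the lower-limb state.
lower = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
lower.fit(X_simple, state)

# Upper level: per state, PCA-compressed features feed a dedicated ANN
# that distinguishes the upper-limb movement, i.e. the specific activity.
upper = {}
for s in np.unique(state):
    mask = state == s
    pca = PCA(n_components=10).fit(X_full[mask])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(pca.transform(X_full[mask]), activity[mask])
    upper[s] = (pca, clf)

def predict(x_simple, x_full):
    """Route one sample through the hierarchy: state first, then activity."""
    s = lower.predict(x_simple.reshape(1, -1))[0]
    pca, clf = upper[s]
    return clf.predict(pca.transform(x_full.reshape(1, -1)))[0]

print(predict(X_simple[0], X_full[0]))
```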


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4119 ◽  
Author(s):  
Alexander Diete ◽  
Heiner Stuckenschmidt

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular data sources. Interaction sensors, however, have one drawback: they may not differentiate between a proper interaction and the mere touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is touched but no interaction follows. Many scenarios, such as medicine intake, nevertheless rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric activity-recognition approach. Our solution relies on object detection to recognize activity-critical objects in a frame. As a high-quality camera view cannot always be expected, we enrich the vision features with inertial sensor data that captures the user's arm movement, thereby compensating for the drawbacks of each respective sensor. We present results of combining inertial and video features to recognize human activities in different types of scenarios, achieving an F1-measure of up to 79.6%.
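One straightforward way to realize this kind of multimodal combination is feature-level fusion: concatenating per-window object-detection confidences with inertial arm-motion features and training a single classifier, scored by macro F1. The sketch below follows that pattern; the fusion-by-concatenation choice, the random-forest classifier, and all array shapes are assumptions rather than the paper's exact method.

```python
# A minimal sketch of feature-level fusion of vision and inertial features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
vision = rng.random(size=(n, 20))    # placeholder: object confidence scores
inertial = rng.normal(size=(n, 24))  # placeholder: arm-motion features
y = rng.integers(0, 5, size=n)       # placeholder: activity labels

# Feature-level fusion: concatenate both modalities, so the classifier can
# fall back on inertial cues when the camera view is poor.
X = np.hstack([vision, inertial])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
print(f"mean macro F1: {scores.mean():.3f}")
```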

