Using additional training sensors to improve single-sensor complex activity recognition

2021 ◽  
Author(s):  
Paula Lago ◽  
Moe Matsuki ◽  
Kohei Adachi ◽  
Sozo Inoue
Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1685
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul

Sensor-based human activity recognition (S-HAR) has become an important and high-impact research topic within human-centered computing. Over the last decade, successful applications of S-HAR have emerged from academic research and industry, including healthcare monitoring, smart home control, and daily sport tracking. However, many current applications increasingly require the recognition of complex human activities (CHA), which has begun to attract the attention of the HAR research field because CHA are considerably harder to recognize than simple human activities (SHA). Prior S-HAR work has shown that deep learning (DL), a class of machine learning built on deep artificial neural networks, achieves a high degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two families of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focus on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) applied to complex activity recognition tasks. We also study the efficiency of four hybrid DL models that combine convolutional layers with these RNN-based models. Experimental studies on the UTwente dataset demonstrate that the proposed hybrid RNN-based models achieve a high level of recognition performance across a variety of performance indicators, including accuracy, F1-score, and confusion matrix. The results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models, with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in the other scenarios (99.44% using only simple activity data and 98.78% with a combination of simple and complex activities).
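The hybrid CNN-BiGRU idea the abstract describes can be sketched as a forward pass in NumPy: a 1-D convolution extracts local motion features from a sensor window, a GRU is run over those features in both time directions, and the concatenated final states feed a softmax classifier. This is a minimal illustration with random weights and hypothetical shapes (128-sample window, 6 inertial channels, 7 classes), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d_relu(x, kernels):
    """Valid 1-D convolution over the time axis with ReLU.
    x: (T, C_in), kernels: (C_out, K, C_in) -> (T-K+1, C_out)."""
    T = x.shape[0]
    C_out, K, _ = kernels.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        out[t] = np.einsum("okc,kc->o", kernels, x[t:t + K])
    return np.maximum(out, 0.0)

def gru(x_seq, W, U, H):
    """Single-layer GRU over x_seq: (T, C) -> final hidden state (H,).
    W: (3, C, H) input weights, U: (3, H, H) recurrent weights,
    in the order (update gate, reset gate, candidate state)."""
    h = np.zeros(H)
    for x in x_seq:
        z = sigmoid(x @ W[0] + h @ U[0])              # update gate
        r = sigmoid(x @ W[1] + h @ U[1])              # reset gate
        h_tilde = np.tanh(x @ W[2] + (r * h) @ U[2])  # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

# Illustrative shapes: a 128-sample window of 6 inertial channels,
# 16 conv filters of width 5, hidden size 32, 7 activity classes.
T, C_in, C_out, K, H, n_classes = 128, 6, 16, 5, 32, 7
window = rng.standard_normal((T, C_in))

kernels = rng.standard_normal((C_out, K, C_in)) * 0.1
Wf, Uf = rng.standard_normal((3, C_out, H)) * 0.1, rng.standard_normal((3, H, H)) * 0.1
Wb, Ub = rng.standard_normal((3, C_out, H)) * 0.1, rng.standard_normal((3, H, H)) * 0.1
W_out = rng.standard_normal((2 * H, n_classes)) * 0.1

feats = conv1d_relu(window, kernels)   # local motion features
h_fwd = gru(feats, Wf, Uf, H)          # forward pass over time
h_bwd = gru(feats[::-1], Wb, Ub, H)    # backward pass over time
h = np.concatenate([h_fwd, h_bwd])     # BiGRU summary of the window

logits = h @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax class probabilities
print(probs.shape)
```

In a trained model the convolutional front end typically reduces the sequence length the recurrent layers must process, which is one reason such hybrids train faster than pure RNN stacks on raw sensor streams.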


Author(s):  
Mohamed H Abdelhafiz ◽  
Mohammed I Awad ◽  
Ahmed Sadek ◽  
Farid Tolbah

This paper describes the development of a human gait activity recognition system. A multi-sensor recognition system developed for this purpose was reduced to a single-sensor recognition system. A sensor election method based on the maximum relevance minimum redundancy (mRMR) feature selector was devised to determine the optimum sensor position for activity recognition. The election method showed that the thigh contributes most to recognizing walking, stair and ramp ascending, and descending activities. A recognition algorithm, which depends mainly on features classified by a random forest and selected by a combined feature selector using mRMR and a genetic algorithm, was modified to compensate for the degradation in prediction accuracy caused by the reduction in the number of sensors. The first modification was implementing a double-layer classifier to discriminate between the interfering activities. The second modification was adding physical features to the feature dictionary used. These modifications succeeded in improving the prediction accuracy, allowing a single-sensor recognition system to behave in the same manner as a multi-sensor activity recognition system.
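A sensor election of the kind described above can be sketched as greedy mRMR feature selection followed by a vote over sensor positions. The sketch below uses absolute Pearson correlation as a simple stand-in for the mutual-information estimates mRMR normally uses, and the sensor names, feature names, and synthetic data are hypothetical, chosen so that the thigh features carry the class signal.

```python
import numpy as np

rng = np.random.default_rng(1)

def abs_corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

def mrmr_select(X, y, names, k):
    """Greedy max-relevance min-redundancy selection. Relevance and
    redundancy are measured with |Pearson correlation| here, a simple
    stand-in for the mutual-information estimates mRMR normally uses."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            relevance = abs_corr(X[:, j], y)
            redundancy = (np.mean([abs_corr(X[:, j], X[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]

# Hypothetical synthetic features from three candidate sensor positions;
# only the thigh features are informative about the activity label.
n = 2000
y = rng.integers(0, 2, n).astype(float)   # e.g. walking vs. stair ascending
X = np.column_stack([
    y + 0.3 * rng.standard_normal(n),     # thigh_acc_mean  (informative)
    y + 0.3 * rng.standard_normal(n),     # thigh_gyro_mean (informative)
    rng.standard_normal(n),               # wrist_acc_mean  (noise)
    rng.standard_normal(n),               # chest_acc_mean  (noise)
])
names = ["thigh_acc_mean", "thigh_gyro_mean", "wrist_acc_mean", "chest_acc_mean"]

top = mrmr_select(X, y, names, k=2)
# Elect the sensor position that contributes the most selected features.
elected = max({"thigh", "wrist", "chest"},
              key=lambda s: sum(f.startswith(s) for f in top))
print(top, elected)
```

The redundancy term matters once several sensors at one position produce near-duplicate features: without it, a greedy selector would keep picking copies of the same signal instead of complementary ones.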


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5770 ◽  
Author(s):  
Keshav Thapa ◽  
Zubaer Md. Abdullah Al ◽  
Barsha Lamichhane ◽  
Sung-Hyun Yang

Human activity recognition has become an important research topic within pervasive computing, ambient assisted living (AAL), robotics, healthcare monitoring, and many other fields. Techniques for recognizing simple, single activities are now well established, but recognizing complex activities, such as concurrent and interleaved activities, remains a major challenge. In this paper, we propose a two-phase hybrid deep machine learning approach using bidirectional Long Short-Term Memory (BiLSTM) and Skip-Chain Conditional Random Fields (SCCRF) to recognize complex activities. BiLSTM is a sequential deep learning architecture derived from the Recurrent Neural Network (RNN). SCCRF is a variant of the conditional random field (CRF) that can represent long-term dependencies. In the first phase of the proposed approach, we recognize concurrent activities using the BiLSTM technique; in the second phase, SCCRF identifies interleaved activities. We analyze the accuracy of the proposed framework against counterpart state-of-the-art methods on publicly available smart-home datasets. Our experimental results surpass previously proposed approaches, with an average accuracy of more than 93%.
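The CRF half of such a pipeline decodes a best label sequence from per-step scores plus pairwise transition scores. The sketch below shows exact Viterbi decoding for the plain linear-chain case with toy numbers; a skip-chain CRF additionally connects distant steps that share evidence, which makes exact decoding intractable in general and is why SCCRF inference is usually approximate. The activity names and scores here are illustrative only.

```python
import numpy as np

def viterbi(emission, transition):
    """Exact MAP decoding for a linear-chain CRF.
    emission: (T, S) per-step label scores; transition: (S, S) pairwise
    scores. A skip-chain CRF (SCCRF) adds long-range edges between distant
    steps, which this exact chain decoder does not cover."""
    T, S = emission.shape
    score = emission[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition   # (prev label, current label)
        back[t] = cand.argmax(axis=0)        # best predecessor per label
        score = cand.max(axis=0) + emission[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):            # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: two interleaved activities (0 = cooking, 1 = phone call).
emission = np.array([[2.0, 0.0],
                     [0.0, 2.0],
                     [0.0, 1.5],
                     [2.0, 0.0]])
transition = np.array([[0.5, 0.0],   # mild preference for staying in
                       [0.0, 0.5]])  # the same activity
print(viterbi(emission, transition))  # -> [0, 1, 1, 0]
```

The decoded sequence switches activity and later switches back, which is exactly the interleaving pattern the second phase is meant to recover.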


2019 ◽  
Vol 18 (4) ◽  
pp. 857-870 ◽  
Author(s):  
Pratool Bharti ◽  
Debraj De ◽  
Sriram Chellappan ◽  
Sajal K. Das

Author(s):  
Nehal A. Sakr ◽  
Mervat Abu-ElKheir ◽  
A. Atwan ◽  
H. H. Soliman

In our daily lives, humans perform different Activities of Daily Living (ADL), such as cooking and studying. By nature, humans perform these activities in either a sequential/simple or an overlapping/complex manner. Many research attempts have addressed simple activity recognition, but complex activity recognition remains a challenging issue. Recognizing complex activities is a multilabel classification problem, in which a test instance is assigned to multiple overlapping activities. Existing data-driven techniques for complex activity recognition can recognize at most two overlapping activities and require a training dataset of complex (i.e., multilabel) activities. In this paper, we propose a multilabel classification approach for complex activity recognition using a combination of Emerging Patterns and Fuzzy Sets. Our approach requires a training dataset of only simple (i.e., single-label) activities. First, we use a pattern mining technique to extract discriminative features called Strong Jumping Emerging Patterns (SJEPs) that exclusively represent each activity. Then, our scoring function takes the SJEPs and the fuzzy membership values of incoming sensor data and outputs the activity label(s). We validate our approach on two different datasets. Experimental results demonstrate the efficiency and superiority of our approach over existing approaches.
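The SJEP-plus-fuzzy-scoring idea can be sketched end to end on toy data: mine itemsets that appear only under one activity label, then score an incoming sensor window by combining fuzzy memberships with min (fuzzy AND within a pattern) and max (fuzzy OR across patterns). The training events, membership values, and threshold below are hypothetical, and the brute-force miner stands in for the paper's pattern mining technique.

```python
from itertools import combinations

# Hypothetical single-label (simple-activity) training data: each instance
# is the set of discretized sensor events seen during one activity episode.
train = {
    "cooking":  [{"stove_on", "kitchen"}, {"stove_on", "fridge"}],
    "studying": [{"desk_lamp", "book"}, {"book", "chair"}],
}

def mine_sjeps(train, max_len=2):
    """Brute-force Strong Jumping Emerging Patterns: itemsets supported by
    at least one instance of their own activity and by none of any other."""
    sjeps = {}
    for act, instances in train.items():
        others = [i for a, ins in train.items() if a != act for i in ins]
        items = sorted(set().union(*instances))
        patterns = []
        for k in range(1, max_len + 1):
            for combo in combinations(items, k):
                p = set(combo)
                if any(p <= inst for inst in instances) and \
                   not any(p <= inst for inst in others):
                    patterns.append(p)
        sjeps[act] = patterns
    return sjeps

def fuzzy_labels(sjeps, membership, threshold=0.5):
    """A pattern's degree of presence is the minimum fuzzy membership of
    its items (fuzzy AND); an activity's score is the maximum over its
    patterns (fuzzy OR). Every activity above the threshold is emitted,
    which is what makes the output multilabel."""
    labels = {}
    for act, patterns in sjeps.items():
        score = max((min(membership.get(i, 0.0) for i in p)
                     for p in patterns), default=0.0)
        if score >= threshold:
            labels[act] = score
    return labels

sjeps = mine_sjeps(train)
# An incoming window overlapping both activities, as fuzzy memberships.
membership = {"stove_on": 0.9, "kitchen": 0.7, "book": 0.8}
print(fuzzy_labels(sjeps, membership))  # -> {'cooking': 0.9, 'studying': 0.8}
```

Because both activities clear the threshold, the window is labeled with both, even though training contained only single-label episodes; this is the property the abstract highlights.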

