Human Activity Recognition on Smartphones Using a Bidirectional LSTM Network
Author(s): Fabio Hernandez, Luis F. Suarez, Javier Villamizar, Miguel Altuve

2020, Vol. 20 (3), pp. 1191-1201
Author(s): Haobo Li, Aman Shrestha, Hadi Heidari, Julien Le Kernec, Francesco Fioranelli

Sensors, 2021, Vol. 21 (5), pp. 1636
Author(s): Sakorn Mekruksavanich, Anuchit Jitpattanakul

Human Activity Recognition (HAR) based on inertial motion data has gained considerable momentum in recent years, both in research and in industrial applications. Broadly, this has been driven by the accelerating development of intelligent and smart environments and systems that cover all aspects of human life, including healthcare, sports, manufacturing, and commerce. Such environments and systems rely on activity recognition, which aims to identify the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Because conventional Machine Learning (ML) techniques rely on handcrafted features, current research suggests that deep-learning approaches, which extract features automatically from raw sensor data, are better suited to this task. In this work, a generic HAR framework for smartphone sensor data is proposed, based on Long Short-Term Memory (LSTM) networks for time-series data. Four baseline LSTM networks are compared to analyze the impact of using different kinds of smartphone sensor data. In addition, a hybrid network called 4-layer CNN-LSTM is proposed to improve recognition performance. The HAR method is evaluated on the public smartphone-based UCI-HAR dataset using various combinations of sample generation processes (overlapping-window (OW) and non-overlapping-window (NOW)) and validation protocols (10-fold and leave-one-subject-out (LOSO) cross-validation). Moreover, Bayesian optimization is employed to tune the hyperparameters of each LSTM network. The experimental results indicate that the proposed 4-layer CNN-LSTM network performs well in activity recognition, improving average accuracy by up to 2.24% over prior state-of-the-art approaches.
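The abstract does not give the exact layer configuration of the proposed 4-layer CNN-LSTM, so the following is only a minimal sketch of the general idea: segment raw inertial signals into fixed-length windows (overlapping for OW, non-overlapping for NOW), pass each window through convolutional feature extractors, and classify the resulting sequence with an LSTM. Window length, channel count, filter sizes, and all other hyperparameters below are illustrative assumptions, not the authors' tuned values.

```python
# Hypothetical CNN-LSTM sketch for smartphone HAR on UCI-HAR-style input:
# windows of 128 time steps x 9 inertial channels, 6 activity classes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 128      # samples per window (UCI-HAR uses 128-step windows at 50 Hz)
CHANNELS = 9      # body acc (3) + gyroscope (3) + total acc (3)
NUM_CLASSES = 6   # walking, upstairs, downstairs, sitting, standing, laying

def make_windows(signal, labels, window=WINDOW, overlap=0.5):
    """Segment a continuous multichannel signal into fixed-length windows.
    overlap=0.5 gives overlapping windows (OW); overlap=0.0 gives
    non-overlapping windows (NOW)."""
    step = max(1, int(window * (1.0 - overlap)))
    xs, ys = [], []
    for start in range(0, len(signal) - window + 1, step):
        xs.append(signal[start:start + window])
        # label each window by the majority label inside it
        ys.append(np.bincount(labels[start:start + window]).argmax())
    return np.stack(xs), np.array(ys)

def build_cnn_lstm(window=WINDOW, channels=CHANNELS, num_classes=NUM_CLASSES):
    """Illustrative CNN-LSTM: convolutional feature extraction over time,
    followed by an LSTM layer and a softmax classifier."""
    inputs = layers.Input(shape=(window, channels))
    x = layers.Conv1D(64, kernel_size=3, activation="relu", padding="same")(inputs)
    x = layers.Conv1D(64, kernel_size=3, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Dropout(0.3)(x)
    x = layers.LSTM(128)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Synthetic stand-in data; replace with the real UCI-HAR inertial signals.
    raw = np.random.randn(10_000, CHANNELS).astype("float32")
    lab = np.random.randint(0, NUM_CLASSES, size=10_000)
    X, y = make_windows(raw, lab, overlap=0.5)   # OW; use overlap=0.0 for NOW
    model = build_cnn_lstm()
    model.fit(X, y, epochs=2, batch_size=64, validation_split=0.2)
```

In practice the hyperparameters shown here (filter counts, kernel size, LSTM units, dropout rate) are exactly the kind of quantities the paper reports tuning with Bayesian optimization rather than fixing by hand.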

