A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3726 ◽  
Author(s):  
Bandar Almaslukh ◽  
Abdel Artoli ◽  
Jalal Al-Muhtadi

Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective, unobtrusive and facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR, doing so either by using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these approaches still needs improvement to produce a reliable smartphone-based HAR system. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the proposed model using the RealWorld HAR public dataset. We find that our proposed deep learning model increases the overall performance of position-independent HAR from 84% to 88% compared to the state-of-the-art traditional machine learning method. In addition, the position detection performance of our model improves substantially, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate its applicability to real-time applications.
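Smartphone-based HAR pipelines like the one above typically segment the raw accelerometer/gyroscope stream into fixed-length, overlapping windows before feeding them to a CNN. The paper does not publish its segmentation code, so the following is only a minimal sketch of that common preprocessing step; the window size and step are illustrative placeholders, not the authors' settings.

```python
def sliding_windows(signal, window_size, step):
    """Split a 1-D sensor stream into fixed-length, possibly overlapping
    windows; each window later becomes one input sample for the classifier."""
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]
```

With a 50% overlap (step = window_size // 2), consecutive windows share half their samples, a common choice that increases the number of training examples without discarding boundary activity transitions.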

2019 ◽  
Vol 25 (2) ◽  
pp. 743-755 ◽  
Author(s):  
Shaohua Wan ◽  
Lianyong Qi ◽  
Xiaolong Xu ◽  
Chao Tong ◽  
Zonghua Gu

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 635
Author(s):  
Yong Li ◽  
Luping Wang

Due to the wide application of human activity recognition (HAR) in sports and health, a large number of HAR models based on deep learning have been proposed. However, many existing models ignore the effective extraction of the spatial and temporal features of human activity data. This paper proposes a deep learning model based on residual blocks and a bi-directional LSTM (BiLSTM). The model first automatically extracts spatial features from the multidimensional signals of MEMS inertial sensors using residual blocks, and then obtains the forward and backward dependencies of the feature sequence using the BiLSTM. Finally, the obtained features are fed into a Softmax layer to complete the human activity recognition. The optimal parameters of the model are obtained experimentally. A homemade dataset containing six common human activities (sitting, standing, walking, running, going upstairs and going downstairs) is developed. The proposed model is evaluated on our dataset and on two public datasets, WISDM and PAMAP2. The experimental results show that the proposed model achieves accuracies of 96.95%, 97.32% and 97.15% on our dataset, WISDM and PAMAP2, respectively. Compared with some existing models, the proposed model has better performance and fewer parameters.
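The core idea of the residual block mentioned above is a convolution path whose output is added back to its input (a skip connection) before the non-linearity. The sketch below is a single-channel, dependency-free illustration of that structure, not the authors' implementation; kernel values, depth and channel counts are assumptions for demonstration only.

```python
def conv1d_same(x, kernel, bias=0.0):
    # 'same'-padded 1-D convolution (cross-correlation) over one channel
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(xp[t + j] * kernel[j] for j in range(k)) + bias
            for t in range(len(x))]

def relu(v):
    return [max(0.0, a) for a in v]

def residual_block(x, kernel1, kernel2):
    h = relu(conv1d_same(x, kernel1))            # first conv + ReLU
    h = conv1d_same(h, kernel2)                  # second conv
    return relu([a + b for a, b in zip(h, x)])   # identity skip, then ReLU
```

The skip connection lets gradients flow directly through the identity path, which is what allows such blocks to be stacked deeply without degrading training; in the paper's model, the sequence of features this produces is what the BiLSTM then scans forwards and backwards.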


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3845
Author(s):  
Ankita ◽  
Shalli Rani ◽  
Himanshi Babbar ◽  
Sonya Coleman ◽  
Aman Singh ◽  
...  

Traditional pattern recognition approaches have gained a lot of popularity. However, they depend largely on manual feature extraction, which makes the resulting models hard to generalize. Sequences of accelerometer data recorded by smartphones can be classified into well-known movements, enabling human activity recognition. Given the high success and wide adoption of deep learning approaches for recognizing human activities, these techniques are now widely used in wearable devices and smartphones. In this paper, convolutional layers are combined with long short-term memory (LSTM) in a deep neural network for human activity recognition (HAR). The proposed model extracts features automatically and categorizes them using the model's learned attributes. LSTM is a variant of the recurrent neural network (RNN) well known for processing temporal sequences. In the proposed architecture, the UCI-HAR dataset, recorded with a Samsung Galaxy S2, is used for various human activities. A single CNN feature extractor and an LSTM model are connected in series: the CNN is applied to each input window, and its output is passed to the LSTM classifier as one time step. The number of filter maps used to map the various portions of the input is the most important hyperparameter. The observations are transformed using Gaussian standardization. The proposed CNN-LSTM model is efficient and lightweight, and it has shown higher robustness and better activity detection capability than traditional algorithms, achieving an accuracy of 97.89%.
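The Gaussian standardization mentioned in this abstract is ordinary z-score scaling: each sensor channel is shifted to zero mean and scaled to unit variance before being fed to the CNN-LSTM. A minimal sketch, assuming per-channel statistics computed over the whole recording (the paper does not specify whether statistics are per window or per recording):

```python
from statistics import mean, pstdev

def standardize(samples):
    """Zero-mean, unit-variance (z-score) scaling of one sensor channel."""
    mu, sigma = mean(samples), pstdev(samples)
    return [(s - mu) / sigma for s in samples]
```

In practice the mean and standard deviation would be estimated on the training split only and then reused to transform the test split, so that no test information leaks into the scaling.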


Due to advancements in technology, the availability of resources and the increased utilization of on-node sensors, enormous amounts of data are obtained. This physiological information must be analyzed and classified by efficient and effective approaches such as deep learning and artificial intelligence. Human activity recognition (HAR) plays a dominant role in sports, security, anti-crime and healthcare, as well as in environmental applications such as wildlife observation. Most techniques work well for offline processing rather than real-time processing. Few approaches provide high accuracy for real-time processing of large-scale data; one of the most promising is deep learning. Limited resources are one of the causes restricting the use of deep learning on low-power body-worn devices, even though deep learning implementations are known to produce precise results on many computing systems. In this paper we suggest a deep learning approach that integrates features learned from inertial sensor data with complementary knowledge obtained from a set of shallow features, making accurate real-time activity classification possible. The aim of this integrated design is to eliminate the obstructions that deep learning methods pose for real-time analysis. Before passing the data into the deep learning framework, we perform spectral analysis to optimize the proposed methodology for on-node computation. The accuracy of the combined approach is tested using datasets obtained from controlled laboratory and uncontrolled real-world environments. Our results demonstrate the validity of the methodology on various human activity datasets, outperforming other techniques, including the two strategies used within our combined pipeline.
We additionally show that our integrated design's classification times are consistent with on-node real-time analysis requirements on smartphones and wearable technology.
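The spectral analysis step described above typically means transforming each sensor window into the frequency domain and summarizing it, for example by its magnitude spectrum or dominant frequency, before the deep network sees it. The abstract gives no implementation details, so the following direct-DFT sketch is only an illustration of that kind of shallow spectral feature; a real on-node system would use an FFT for efficiency.

```python
import cmath

def dft_magnitudes(x):
    # magnitude spectrum via a direct discrete Fourier transform
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def dominant_frequency(x, fs):
    """Frequency (Hz) of the strongest non-DC spectral component
    of a window sampled at fs Hz."""
    mags = dft_magnitudes(x)
    k = max(range(1, len(mags)), key=mags.__getitem__)  # skip the DC bin
    return k * fs / len(x)
```

Features like the dominant frequency are cheap to compute on-node and complement the learned deep features, which is the combination this abstract argues enables accurate real-time classification on constrained hardware.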

