A Literature Study of Human Activity Recognition (HAR) Using Inertial Sensors

2021, Vol 5 (6), pp. 1193-1206
Author(s): Humaira Nur Pradani, Faizal Mahananto

Human activity recognition (HAR) is a widely researched topic because of its diverse applications in fields such as health, construction, and UI/UX. As MEMS (Micro-Electro-Mechanical Systems) technology evolves, HAR data acquisition can be done more easily and efficiently using inertial sensors. Processing inertial sensor data for HAR requires a series of steps and a variety of techniques. This literature study aims to summarize the approaches that have been used in existing research to build HAR models. Published articles were collected from ScienceDirect, IEEE Xplore, and MDPI over the past five years (2017-2021). From the 38 studies identified, we extracted information on the areas of HAR implementation, data acquisition, public datasets, preprocessing methods, feature extraction approaches, feature selection methods, classification models, training scenarios, model performance, and open research challenges. The analysis shows that there is still room to improve the performance of HAR models. Therefore, future research on HAR using inertial sensors can focus on extracting and selecting more optimal features, considering the robustness of the model, increasing the complexity of the classified activities, and balancing accuracy with computation time.
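The processing chain that recurs across the surveyed studies (sliding-window segmentation, handcrafted features, classification) can be sketched roughly as below; the window size, feature set, and classifier are illustrative placeholders, not settings taken from any reviewed paper.

```python
# Minimal sketch of the typical inertial-sensor HAR pipeline summarized in the review:
# sliding-window segmentation -> handcrafted features -> classifier.
# Window length, overlap and the feature set are illustrative choices only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_windows(signal, win=128, step=64):
    """Segment a (n_samples, n_axes) inertial stream into fixed-length windows."""
    return np.stack([signal[s:s + win]
                     for s in range(0, len(signal) - win + 1, step)])

def time_domain_features(windows):
    """Per-axis mean, std and signal magnitude area for every window."""
    mean = windows.mean(axis=1)
    std = windows.std(axis=1)
    sma = np.abs(windows).mean(axis=1).sum(axis=1, keepdims=True)
    return np.hstack([mean, std, sma])

acc = np.random.randn(10_000, 3)                 # placeholder tri-axial accelerometer data
X = time_domain_features(sliding_windows(acc))
y = np.random.randint(0, 6, len(X))              # placeholder activity labels per window
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))
```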

Sensors, 2021, Vol 21 (18), pp. 6316
Author(s): Dinis Moreira, Marília Barandas, Tiago Rocha, Pedro Alves, Ricardo Santos, ...

With the fast increase in the demand for location-based services and the proliferation of smartphones, the topic of indoor localization is attracting great interest. In indoor environments, users’ performed activities carry useful semantic information. These activities can then be used by indoor localization systems to confirm users’ current relative locations in a building. In this paper, we propose a deep-learning model based on a Convolutional Long Short-Term Memory (ConvLSTM) network to classify human activities within the indoor localization scenario using smartphone inertial sensor data. Results show that the proposed human activity recognition (HAR) model accurately identifies nine types of activities: not moving, walking, running, going up in an elevator, going down in an elevator, walking upstairs, walking downstairs, going up a ramp, and going down a ramp. Moreover, predicted human activities were integrated within an existing indoor positioning system and evaluated in a multi-story building across several testing routes, with an average positioning error of 2.4 m. The results show that the inclusion of human activity information can reduce the overall localization error of the system and actively contribute to the better identification of floor transitions within a building. The conducted experiments demonstrated promising results and verified the effectiveness of using human activity-related information for indoor localization.
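As a rough illustration of this kind of model, the sketch below stacks 1D convolutions in front of an LSTM over fixed windows of smartphone accelerometer and gyroscope data; the layer sizes, window length, and six input channels are assumptions for illustration, not the authors' exact ConvLSTM architecture.

```python
# Hedged sketch of a ConvLSTM-style classifier for windowed smartphone inertial
# data (accelerometer + gyroscope). Hyperparameters and the 9-class output layer
# are illustrative, not the published model.
import tensorflow as tf

def build_conv_lstm(window_len=128, n_channels=6, n_classes=9):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, n_channels)),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.LSTM(64),                       # temporal modelling
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_conv_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Per-window activity predictions from such a model are what the positioning system then consumes as semantic hints, for instance about floor transitions.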


Sensors, 2019, Vol 19 (7), pp. 1716
Author(s): Seungeun Chung, Jiyoun Lim, Kyoung Ju Noh, Gague Kim, Hyuntae Jeong

In this paper, we perform a systematic study of on-body sensor positioning and data acquisition details for Human Activity Recognition (HAR) systems. We build a testbed that consists of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data, acquired in both real-world and controlled environments. From the experimental results, we identify that activity data with a sampling rate as low as 10 Hz from four sensors, placed on both wrists, the right ankle, and the waist, are sufficient for recognizing Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model to combine the class probabilities of multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve classification performance. By analyzing the accuracy of each sensor on different types of activity, we derive custom weights for multimodal sensor fusion that reflect the characteristics of individual activities.
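Classifier-level sensor fusion of this kind amounts to a weighted combination of each sensor's class-probability output; a minimal sketch, with sensor names, weights, and class counts chosen purely for illustration rather than taken from the paper:

```python
# Sketch of classifier-level fusion: each IMU position has its own classifier,
# and per-sensor weights combine their class-probability vectors.
import numpy as np

def fuse_probabilities(probs_by_sensor, weights):
    """probs_by_sensor: {sensor: (n_windows, n_classes) class probabilities}."""
    total = sum(weights.values())
    fused = sum(weights[s] * p for s, p in probs_by_sensor.items()) / total
    return fused.argmax(axis=1)          # final activity label per window

# Placeholder probabilities for four body positions and six activity classes.
probs = {"left_wrist":  np.random.dirichlet(np.ones(6), size=10),
         "right_wrist": np.random.dirichlet(np.ones(6), size=10),
         "right_ankle": np.random.dirichlet(np.ones(6), size=10),
         "waist":       np.random.dirichlet(np.ones(6), size=10)}
weights = {"left_wrist": 1.2, "right_wrist": 1.2, "right_ankle": 0.8, "waist": 0.8}
print(fuse_probabilities(probs, weights))
```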


Information, 2020, Vol 11 (9), pp. 416
Author(s): Lei Chen, Shurui Fan, Vikram Kumar, Yating Jia

Human activity recognition (HAR) has been increasingly used in medical care, behavior analysis, and the entertainment industry to improve the user experience. Most existing works use fixed models to identify various activities, but these do not adapt well to the dynamic nature of human activities. We investigated activity recognition with postural transition awareness. The inertial sensor data were processed by filters, and both time-domain and frequency-domain characteristics of the signals were used to extract the feature set. Three feature selection algorithms were considered on the 585 features to obtain the optimal feature subset for posture classification, and three classifiers (support vector machine, decision tree, and random forest) were adopted for comparative analysis. In the experiments, the support vector machine gave better classification results than the other two methods, achieving up to 98% accuracy in multi-class classification. Finally, the results were verified by probability estimation.
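A minimal sketch of this feature-and-classifier stage, assuming a handful of time- and frequency-domain descriptors per window and an SVM with probability estimates enabled; the feature list below is illustrative and far smaller than the 585 features considered in the paper:

```python
# Illustrative time/frequency features per window + SVM with probability outputs.
import numpy as np
from sklearn.svm import SVC

def window_features(win):
    """win: (win_len, n_axes). Basic time- and frequency-domain descriptors."""
    spec = np.abs(np.fft.rfft(win, axis=0))
    return np.concatenate([win.mean(axis=0), win.std(axis=0),
                           spec.mean(axis=0), spec.argmax(axis=0)])

X = np.array([window_features(w) for w in np.random.randn(200, 128, 3)])
y = np.random.randint(0, 12, 200)            # placeholder posture labels
svm = SVC(kernel="rbf", probability=True).fit(X, y)
print(svm.predict_proba(X[:3]).round(2))     # probability estimates for verification
```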


Sensors, 2020, Vol 20 (24), pp. 7179
Author(s): Orhan Konak, Pit Wegner, Bert Arnrich

Recent trends in ubiquitous computing have led to a proliferation of studies that focus on human activity recognition (HAR) utilizing inertial sensor data that consist of acceleration, orientation and angular velocity. However, the performances of such approaches are limited by the amount of annotated training data, especially in fields where annotating data is highly time-consuming and requires specialized professionals, such as in healthcare. In image classification, this limitation has been mitigated by powerful oversampling techniques such as data augmentation. Using this technique, this work evaluates to what extent transforming inertial sensor data into movement trajectories and into 2D heatmap images can be advantageous for HAR when data are scarce. A convolutional long short-term memory (ConvLSTM) network that incorporates spatiotemporal correlations was used to classify the heatmap images. Evaluation was carried out on Deep Inertial Poser (DIP), a known dataset composed of inertial sensor data. The results obtained suggest that for datasets with large numbers of subjects, using state-of-the-art methods remains the best alternative. However, a performance advantage was achieved for small datasets, which is usually the case in healthcare. Moreover, movement trajectories provide a visual representation of human activities, which can help researchers to better interpret and analyze motion patterns.
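The core data transformation, re-encoding inertial readings as a trajectory and then as a 2D heatmap image that a ConvLSTM or CNN can consume, can be sketched as below; the naive double integration and histogram rasterization are illustrative simplifications of the trajectory construction applied to the DIP data.

```python
# Hedged sketch of turning inertial data into a 2D heatmap image: crude double
# integration of planar acceleration into a trajectory, then rasterization with
# a 2D histogram. Illustrates the transformation only, not the paper's exact method.
import numpy as np

def acceleration_to_heatmap(acc_xy, dt=0.02, bins=64):
    """acc_xy: (n, 2) planar acceleration. Returns a bins x bins heatmap image."""
    vel = np.cumsum(acc_xy * dt, axis=0)     # naive integration, drifts over time
    pos = np.cumsum(vel * dt, axis=0)
    img, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=bins)
    return img / img.max() if img.max() > 0 else img

heatmap = acceleration_to_heatmap(np.random.randn(500, 2))
print(heatmap.shape)   # (64, 64) image, ready for a ConvLSTM/CNN classifier
```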


Sensors, 2021, Vol 21 (23), pp. 7791
Author(s): Ge Gao, Zhixin Li, Zhan Huan, Ying Chen, Jiuzhen Liang, ...

With the rapid development of the computer and sensor fields, inertial sensor data have been widely used in human activity recognition. At present, most relevant studies divide human activities into basic actions and transitional actions, in which basic actions are classified using unified features, while transitional actions usually rely on context information to determine the category. Because no single existing method handles human activity recognition well on its own, this paper proposes a human activity classification and recognition model based on smartphone inertial sensor data. The model fully considers the feature differences between actions of different properties: it uses a fixed sliding window to segment the inertial sensor data of activities with different attributes, then extracts features and recognizes them on different classifiers. The experimental results show that dynamic and transitional actions achieved the best recognition performance with support vector machines, while static actions were classified better by ensemble classifiers; as for feature selection, frequency-domain features used for dynamic actions gave a high recognition rate, up to 99.35%, while time-domain features used for static and transitional actions gave higher recognition rates of 98.40% and 91.98%, respectively.
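A sketch of the split-by-attribute idea: segment with a fixed sliding window, assign each window a coarse attribute, and train a separate classifier per attribute (an SVM for dynamic/transitional windows, an ensemble for static ones). The variance-based attribute proxy, synthetic data, and labels below are placeholders, not the paper's actual attribute rule.

```python
# Fixed sliding-window segmentation, then one classifier per action attribute.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def segment(signal, win=128, step=64):
    return np.stack([signal[i:i + win] for i in range(0, len(signal) - win + 1, step)])

def features(windows):                        # simple time-domain features per window
    return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

windows = segment(np.random.randn(20_000, 6))        # placeholder 6-channel IMU stream
X, y = features(windows), np.random.randint(0, 8, len(windows))
spread = X[:, 6:].mean(axis=1)                        # crude variance-based proxy
is_static = spread < np.median(spread)                # low variance -> "static" windows

svm = SVC(kernel="rbf").fit(X[~is_static], y[~is_static])                   # dynamic/transitional
ensemble = RandomForestClassifier(n_estimators=100).fit(X[is_static], y[is_static])  # static
```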


Sensors, 2021, Vol 21 (14), pp. 4787
Author(s): Nati Daniel, Itzik Klein

Human activity recognition aims to classify the user's activity in various applications such as healthcare, gesture recognition, and indoor navigation. In the latter, smartphone location recognition is gaining attention as it enhances indoor positioning accuracy. Commonly, the smartphone's inertial sensor readings are used as input to a machine learning algorithm that performs the classification. There are several approaches to tackle such a task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. When using deep learning approaches, feature engineering becomes redundant; moreover, two-dimensional deep learning approaches make it possible to apply methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone's inertial sensors, is proposed. The contributions of this work are a novel time series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to show the benefits of the proposed approach. In addition, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data, and can also be adapted to handle other types of sensory data collected for a classification task.
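The two ingredients, encoding an inertial window as an image and reusing a pretrained vision backbone, can be sketched as follows; the resize-and-tile encoding and the MobileNetV2 backbone are illustrative stand-ins, not the authors' specific encoding or network.

```python
# Hedged sketch: (1) encode a window of inertial readings as an "inertial image",
# (2) fine-tune a frozen pretrained vision backbone on those images.
import numpy as np
import tensorflow as tf

def window_to_image(window, size=96):
    """window: (win_len, n_channels) readings -> (size, size, 3) image tensor."""
    img = tf.image.resize(window[..., np.newaxis], (size, size))  # grey inertial image
    return tf.repeat(img, 3, axis=-1)                             # tile to 3 channels

backbone = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                             include_top=False, pooling="avg",
                                             weights="imagenet")  # downloads weights
backbone.trainable = False                     # transfer learning: freeze the features
model = tf.keras.Sequential([backbone,
                             tf.keras.layers.Dense(8, activation="softmax")])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

img = window_to_image(np.random.randn(128, 6).astype("float32"))  # placeholder window
print(model(tf.expand_dims(img, 0)).shape)                        # (1, 8) class scores
```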


2021, Vol 15 (6), pp. 1-17
Author(s): Chenglin Li, Carrie Lu Tong, Di Niu, Bei Jiang, Xiao Zuo, ...

Deep learning models for human activity recognition (HAR) based on sensor data have been heavily studied recently. However, the generalization ability of deep models on complex real-world HAR data is limited by the availability of high-quality labeled activity data, which are hard to obtain. In this article, we design a similarity embedding neural network that maps input sensor signals onto real vectors through carefully designed convolutional and Long Short-Term Memory (LSTM) layers. The embedding network is trained with a pairwise similarity loss, encouraging the clustering of samples from the same class in the embedded real space, and can be effectively trained on a small dataset and even on a noisy dataset with mislabeled samples. Based on the learned embeddings, we further propose both nonparametric and parametric approaches for activity recognition. Extensive evaluation based on two public datasets has shown that the proposed similarity embedding network significantly outperforms state-of-the-art deep models on HAR classification tasks, is robust to mislabeled samples in the training set, and can also be used to effectively denoise a noisy dataset.
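A minimal sketch of a pairwise similarity objective on learned embeddings, with a small conv+LSTM encoder; the contrastive formulation, margin, and layer sizes are illustrative assumptions rather than the paper's exact loss and architecture.

```python
# Pairwise similarity (contrastive) loss: same-class pairs are pulled together,
# different-class pairs pushed apart by at least a margin.
import tensorflow as tf

def pairwise_contrastive_loss(embeddings, labels, margin=1.0):
    """embeddings: (batch, dim); labels: (batch,) activity ids."""
    d = tf.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    same = tf.cast(labels[:, None] == labels[None, :], tf.float32)
    loss = same * tf.square(d) + (1.0 - same) * tf.square(tf.maximum(margin - d, 0.0))
    return tf.reduce_mean(loss)

encoder = tf.keras.Sequential([                 # small conv + LSTM embedding network
    tf.keras.layers.Input(shape=(128, 6)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16),                  # embedding vector
])
x = tf.random.normal((8, 128, 6))               # placeholder batch of sensor windows
y = tf.constant([0, 0, 1, 1, 2, 2, 3, 3])
print(pairwise_contrastive_loss(encoder(x), y).numpy())
```

On top of such embeddings, the nonparametric classifier can be as simple as a nearest-neighbour lookup in the embedded space.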


Electronics, 2021, Vol 10 (2), pp. 111
Author(s): Pengjia Tu, Junhuai Li, Huaijun Wang, Ting Cao, Kan Wang

Human activity recognition (HAR) has vital applications in human–computer interaction, somatosensory games, motion monitoring, etc. Based on acceleration sensor data of human motion and a nonlinear analysis of the human motion time series, a novel HAR method built on nonlinear chaotic features is proposed in this paper. First, the C-C method and the G-P algorithm are used to compute the optimal delay time and embedding dimension, respectively, and a Reconstructed Phase Space (RPS) is formed by time-delay embedding of the accelerometer data. Subsequently, a two-dimensional chaotic feature matrix is constructed, where the chaotic features are the correlation dimension and the largest Lyapunov exponent (LLE) of the attractor trajectory in the RPS. Classification algorithms are then used to classify and recognize the two different activity classes, i.e., basic and transitional activities. The experimental results show that the chaotic features achieve higher accuracy than traditional time- and frequency-domain features.
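The Reconstructed Phase Space step is plain time-delay embedding; a minimal sketch with a fixed delay and embedding dimension (in the paper these parameters come from the C-C method and the G-P algorithm, not fixed values):

```python
# Time-delay embedding of a 1-D acceleration signal into a reconstructed phase space.
import numpy as np

def time_delay_embed(x, delay=8, dim=3):
    """Return the (n_points, dim) delay-embedded trajectory of a 1-D signal x."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

acc_mag = np.linalg.norm(np.random.randn(2_000, 3), axis=1)   # placeholder |a(t)|
trajectory = time_delay_embed(acc_mag)                         # attractor in the RPS
print(trajectory.shape)   # chaotic features such as the correlation dimension and
                          # the largest Lyapunov exponent are estimated on this trajectory
```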

