Information ◽  
2019 ◽  
Vol 10 (6) ◽  
pp. 203 ◽  
Jun Long ◽  
Wuqing Sun ◽  
Zhan Yang ◽  
Osolo Ian Raymond

Human activity recognition (HAR) using deep neural networks has become a hot topic in human–computer interaction. Machines can effectively identify naturalistic human activities by learning from a large collection of sensor data. Activity recognition is not only an interesting research problem but also has many real-world practical applications. Building on the success of residual networks in automatically learning high-level representations, we propose a novel asymmetric residual network, named ARN. ARN is implemented using two identical path frameworks consisting of (1) a short time window, which is used to capture spatial features, and (2) a long time window, which is used to capture fine temporal features. The long-time-window path can be made very lightweight by reducing its channel capacity, while still being able to learn useful temporal representations for activity recognition. In this paper, we mainly focus on proposing a new model to improve the accuracy of HAR. To demonstrate the effectiveness of the ARN model, we carried out extensive experiments on benchmark datasets (i.e., OPPORTUNITY, UniMiB-SHAR) and compared the results with several conventional and state-of-the-art learning-based methods. We also discuss the influence of network parameters on performance to provide insights into optimizing the model. Results from our experiments show that ARN is effective in recognizing human activities from wearable sensor datasets.
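The asymmetric two-path idea above can be sketched in a few lines. This is a minimal illustration, not the ARN architecture itself: the window lengths, the reduced channel count on the long path, and the mean/std summary statistics are all illustrative assumptions standing in for the paper's learned convolutional features.

```python
import numpy as np

def dual_path_features(signal, short_win=16, long_win=64, long_channels=4):
    """Two-path feature extractor in the spirit of ARN: a short time
    window for fine spatial detail, and a lightweight long time window
    (reduced channel capacity) for coarser temporal context.
    `signal` has shape (time_steps, sensor_channels)."""
    # Short path: statistics over the most recent short window, all channels.
    short = signal[-short_win:]
    short_feats = np.concatenate([short.mean(axis=0), short.std(axis=0)])

    # Long path: a longer window, but made lightweight here by keeping
    # only the first `long_channels` channels (a stand-in for reduced
    # channel capacity in the paper's long-window path).
    long = signal[-long_win:, :long_channels]
    long_feats = np.concatenate([long.mean(axis=0), long.std(axis=0)])

    # Fuse the two paths into a single feature vector for a classifier.
    return np.concatenate([short_feats, long_feats])
```

For a 6-channel accelerometer/gyroscope window this yields 2·6 + 2·4 = 20 features; in the actual network, each path would instead be a stack of residual blocks whose outputs are fused before classification.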

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3910 ◽  
Taeho Hur ◽  
Jaehun Bang ◽  
Thien Huynh-The ◽  
Jongwon Lee ◽  
Jee-In Kim ◽  

The most significant barrier to success in human activity recognition is extracting and selecting the right features. In traditional methods, the features are chosen by humans, which requires the user to have expert knowledge or to do a large amount of empirical study. Newly developed deep learning technology can automatically extract and select features. Among the various deep learning methods, convolutional neural networks (CNNs) have the advantages of local dependency and scale invariance and are suitable for temporal data such as accelerometer (ACC) signals. In this paper, we propose an efficient human activity recognition method, namely Iss2Image (Inertial sensor signal to Image), a novel encoding technique for transforming an inertial sensor signal into an image with minimum distortion and a CNN model for image-based activity classification. Iss2Image converts real number values from the X, Y, and Z axes into three color channels to precisely infer correlations among successive sensor signal values in three different dimensions. We experimentally evaluated our method using several well-known datasets and our own dataset collected from a smartphone and smartwatch. The proposed method shows higher accuracy than other state-of-the-art approaches on the tested datasets.
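The core of the Iss2Image encoding — mapping the X, Y, and Z axes onto the three color channels of an image — can be sketched as follows. This is an illustrative approximation only: the resampling to a fixed image size and the min–max scaling are my assumptions, whereas the paper's actual encoding splits each real value into integer/decimal parts to minimize distortion.

```python
import numpy as np

def acc_to_image(acc, height=32, width=32):
    """Encode a (time_steps, 3) accelerometer signal as an
    (height, width, 3) image: one color channel per sensor axis."""
    n = height * width
    img = np.zeros((height, width, 3))
    for c in range(3):  # X, Y, Z -> R, G, B
        # Linearly resample the axis to exactly height*width samples.
        axis = np.interp(np.linspace(0, len(acc) - 1, n),
                         np.arange(len(acc)), acc[:, c])
        # Min-max scale into [0, 1] so it behaves like pixel intensity.
        lo, hi = axis.min(), axis.max()
        scaled = (axis - lo) / (hi - lo) if hi > lo else np.zeros(n)
        img[:, :, c] = scaled.reshape(height, width)
    return img
```

Successive sensor readings land in adjacent pixels, so a standard 2-D CNN can then pick up local correlations across the three axes, which is the property the encoding is designed to preserve.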

2020 ◽  
Yang Xu ◽  
Ting Ting Qiu

With the improvement of people's living standards, the demand for health monitoring and exercise detection is increasing, so it is worthwhile to study human activity recognition methods that differ from traditional feature-extraction approaches. This article uses a convolutional neural network (CNN) from deep learning to automatically extract features of activities in daily life, and a stochastic gradient descent algorithm to optimize the network's parameters. The trained network model is compressed with STM32CubeMX-AI. Finally, this article describes the use of the neural network on an embedded device to recognize six activities of daily living: sitting, standing, walking, jogging, going upstairs, and going downstairs. An acceleration sensor carrying human activity information is used to obtain the relevant characteristics of each activity, thereby addressing the human activity recognition (HAR) problem. The network structure of the constructed CNN model is shown in Figure 1, including an input layer, two convolutional layers, and two pooling layers. The best model is selected by comparing the average accuracy of each set of experiments and evaluating the resulting candidates on the test set.
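The described stack (input, two convolutional layers, two pooling layers) can be sketched as a toy forward pass. This is a minimal numpy illustration of the layer sequence, not the article's trained model: the kernel values, ReLU activations, and window length are illustrative assumptions.

```python
import numpy as np

def conv1d(x, w):
    # Valid 1-D convolution (cross-correlation) of signal x with kernel w.
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def maxpool1d(x, size=2):
    # Non-overlapping max pooling; trailing samples are dropped.
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Forward pass through the described stack:
# input -> conv1 + ReLU -> pool1 -> conv2 + ReLU -> pool2
x = np.arange(16, dtype=float)                            # a 16-sample accelerometer window
h = maxpool1d(np.maximum(conv1d(x, np.ones(3) / 3), 0))   # after first conv + pool
y = maxpool1d(np.maximum(conv1d(h, np.ones(3) / 3), 0))   # after second conv + pool
```

Each conv+pool stage roughly halves the temporal resolution while summarizing local structure; in the article the final pooled features would feed a classifier over the six activity classes, with the kernels learned by stochastic gradient descent rather than fixed.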
