Intelligent system for human activity recognition in IoT environment

Author(s):  
Hassan Khaled ◽  
Osama Abu-Elnasr ◽  
Samir Elmougy ◽  
A. S. Tolba

In recent years, the adoption of machine learning has grown steadily in different fields, affecting the day-to-day decisions of individuals. This paper presents an intelligent system for recognizing humans' daily activities in a complex IoT environment. An enhanced capsule neural network model called 1D-HARCapsNet is proposed. It consists of a convolution layer, a primary capsule layer, a flat activity-capsule layer, and an output layer. The model is validated on the WISDM dataset, collected via smart devices and normalized using the random-SMOTE algorithm to handle the dataset's class imbalance. The experimental results indicate the potential and strengths of the proposed 1D-HARCapsNet, which achieved an accuracy of 98.67%, precision of 98.66%, recall of 98.67%, and F1-measure of 0.987, a major improvement over the conventional CapsNet (accuracy 90.11%, precision 91.88%, recall 89.94%, and F1-measure 0.93).
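
A minimal sketch of the capsule-style pipeline described above, assuming PyTorch; the layer sizes, channel counts, and the omitted dynamic-routing step are illustrative placeholders, not the authors' exact 1D-HARCapsNet configuration:

```python
# Sketch of a 1D capsule-style HAR network. Sizes are assumptions;
# routing between capsules is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule squash nonlinearity: keeps direction, shrinks norm into [0, 1)."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

class HARCapsNet1D(nn.Module):
    def __init__(self, in_channels=3, n_classes=6, caps_dim=8, out_caps_dim=16):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, 64, kernel_size=9)                 # convolution layer
        self.primary = nn.Conv1d(64, 8 * caps_dim, kernel_size=9, stride=2)   # primary capsules
        self.out = nn.LazyLinear(n_classes * out_caps_dim)                    # flat activity capsules
        self.caps_dim, self.n_classes, self.out_caps_dim = caps_dim, n_classes, out_caps_dim

    def forward(self, x):                        # x: (batch, channels, time)
        h = F.relu(self.conv(x))
        p = self.primary(h).view(x.size(0), -1, self.caps_dim)
        p = squash(p)                            # primary capsule vectors
        a = self.out(p.flatten(1)).view(-1, self.n_classes, self.out_caps_dim)
        a = squash(a)                            # one capsule per activity class
        return a.norm(dim=-1)                    # capsule length read as class score

# Toy usage: a batch of 4 windows, 3 accelerometer axes, 128 time steps
print(HARCapsNet1D()(torch.randn(4, 3, 128)).shape)   # torch.Size([4, 6])
```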

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7853
Author(s):  
Aleksej Logacjov ◽  
Kerstin Bach ◽  
Atle Kongsvold ◽  
Hilde Bremseth Bårdstu ◽  
Paul Jarle Mork

Existing accelerometer-based human activity recognition (HAR) benchmark datasets that were recorded during free living suffer from non-fixed sensor placement, the use of only one sensor, and unreliable annotations. We make two contributions in this work. First, we present the publicly available Human Activity Recognition Trondheim dataset (HARTH). Twenty-two participants were recorded for 90 to 120 min during their regular working hours using two three-axial accelerometers, attached to the thigh and lower back, and a chest-mounted camera. Experts annotated the data independently using the camera's video signal and achieved high inter-rater agreement (Fleiss' kappa = 0.96), labeling twelve activities. The second contribution of this paper is the training of seven baseline machine learning models for HAR on our dataset: a support vector machine, k-nearest neighbors, random forest, extreme gradient boosting, a convolutional neural network, a bidirectional long short-term memory network, and a convolutional neural network with multi-resolution blocks. The support vector machine achieved the best results, with an F1-score of 0.81 (±0.18), recall of 0.85 (±0.13), and precision of 0.79 (±0.22) in leave-one-subject-out cross-validation. Our high-quality recordings and annotations provide a promising benchmark dataset for researchers to develop innovative machine learning approaches for precise HAR in free living.
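
The baseline evaluation hinges on leave-one-subject-out cross-validation. The sketch below shows that protocol with a scikit-learn SVM, assuming pre-computed per-window features and a subject id per window; feature extraction from the raw HARTH accelerometer streams is not reproduced here, and the hyperparameters are placeholders:

```python
# Leave-one-subject-out (LOSO) evaluation of an SVM baseline on toy data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score

def loso_svm(X, y, subjects):
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = SVC(kernel="rbf", C=1.0)            # hyperparameters are assumptions
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append(f1_score(y[test_idx], pred, average="macro"))
    return np.mean(scores), np.std(scores)

# Toy usage: 200 windows, 10 features, 5 subjects, 3 activities
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 3, size=200)
subjects = rng.integers(0, 5, size=200)
print(loso_svm(X, y, subjects))
```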


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yingjie Lin ◽  
Jianning Wu

A novel multichannel dilated convolutional neural network for improving the accuracy of human activity recognition is proposed. The proposed model uses a multichannel convolution structure with multiple kernels of various sizes to extract multiscale features from high-dimensional human activity data, and replaces the pooling layers used in traditional convolution with dilated convolution. Its advantage is that dilated convolution captures intrinsic sequence information by expanding the receptive field of the convolution kernel without increasing the number of model parameters, while the multichannel structure extracts multiscale gait features through multiple parallel convolution paths. An open human activity recognition dataset is used to evaluate the effectiveness of the proposed model. The experimental results show that the model achieves an accuracy of 95.49%, with a single-sample inference time of approximately 0.34 ms on a low-end machine. These results demonstrate that the model is an efficient real-time HAR model that extracts representative features from sensor signals at low computational cost and is promising as a practical tool.
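
An illustrative sketch of the multichannel dilated-convolution idea, assuming PyTorch: several parallel 1D convolution paths with different kernel sizes and a dilation factor, no pooling layers, concatenated before a linear classifier. The widths, kernel sizes, and dilation used here are assumptions, not the paper's exact configuration:

```python
# Parallel dilated 1D convolution paths; dilation enlarges the receptive field
# without extra parameters, standing in for pooling. Sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiChannelDilatedHAR(nn.Module):
    def __init__(self, in_channels=9, n_classes=6,
                 kernel_sizes=(3, 5, 7), dilation=2, width=32):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv1d(in_channels, width, k, dilation=dilation, padding="same")
            for k in kernel_sizes                 # one convolution path per kernel size
        )
        self.head = nn.Linear(width * len(kernel_sizes), n_classes)

    def forward(self, x):                         # x: (batch, channels, time)
        feats = [F.relu(p(x)).mean(dim=-1) for p in self.paths]  # global average per path
        return self.head(torch.cat(feats, dim=1))

print(MultiChannelDilatedHAR()(torch.randn(2, 9, 128)).shape)    # torch.Size([2, 6])
```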


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Zongying Liu ◽  
Shaoxi Li ◽  
Jiangling Hao ◽  
Jingfeng Hu ◽  
Mingyang Pan

With the accumulation of data and the development of artificial intelligence, human activity recognition has attracted considerable attention from researchers. Many classic machine learning algorithms, such as the artificial neural network, feed-forward neural network, K-nearest neighbors, and support vector machine, achieve good performance in detecting human activity, but each has its own limitations and their prediction accuracy still leaves room for improvement. In this study, we focus on K-nearest neighbors (KNN) and address its limitations. First, a kernel method is employed in the KNN model to transform the input features into high-dimensional features; the resulting KNN with kernel (K-KNN) improves classification accuracy. Second, a novel reduced kernel method is proposed and applied to K-KNN, named Reduced Kernel KNN (RK-KNN), which reduces processing time and enhances classification performance. Moreover, this study proposes an approach for determining the number of neighbors K, which reduces the parameter-dependency problem. In the experiments, the proposed RK-KNN obtains the best performance on benchmark and human activity datasets compared with the other models, showing superior classification ability for human activity recognition. The accuracy on human activity data is 91.60% for HAPT and 92.67% for Smartphone. On average, compared with conventional KNN, RK-KNN increases accuracy by 1.82% and decreases the standard deviation by 0.27, while the processing-time gap between KNN and RK-KNN across all datasets is only 1.26 seconds.
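
A minimal sketch of KNN classification with an RBF kernel distance, assuming NumPy and scikit-learn; it illustrates only the kernel-space distance idea behind K-KNN, while the authors' reduced-kernel step (RK-KNN) and their rule for choosing K are not reproduced:

```python
# Kernel KNN: squared distance in the kernel-induced feature space is
# k(x,x) + k(z,z) - 2 k(x,z); classification is a majority vote over neighbors.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_knn_predict(X_train, y_train, X_test, k=5, gamma=0.5):
    K_tt = rbf_kernel(X_test, X_test, gamma=gamma).diagonal()[:, None]
    K_rr = rbf_kernel(X_train, X_train, gamma=gamma).diagonal()[None, :]
    K_tr = rbf_kernel(X_test, X_train, gamma=gamma)
    d2 = K_tt + K_rr - 2.0 * K_tr                 # (n_test, n_train) kernel distances
    neighbors = np.argsort(d2, axis=1)[:, :k]     # indices of the k nearest training points
    votes = y_train[neighbors]
    return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

# Toy usage with random data and placeholder hyperparameters
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(100, 6)), rng.integers(0, 3, size=100)
X_test = rng.normal(size=(10, 6))
print(kernel_knn_predict(X_train, y_train, X_test))
```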


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Min-Cheol Kwon ◽  
Sunwoong Choi

Human activity recognition using wearable devices has been actively investigated for a wide range of applications. Most existing approaches, however, either focus on simple activities involving whole-body movement or require a variety of sensors to identify daily activities. In this study, we propose a human activity recognition system that collects data from an off-the-shelf smartwatch and uses an artificial neural network for classification. The proposed system is further enhanced using location information. We consider 11 activities, including both simple and daily activities. Experimental results show that these activities can be classified with an accuracy of 95%.
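
To illustrate the idea of appending location information to smartwatch motion features before a neural-network classifier, here is a toy sketch using scikit-learn; the feature set, location categories, and network size are placeholders, not the authors' setup:

```python
# Motion features plus a one-hot location feature fed to a small MLP classifier.
# All data here are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n = 500
motion_feats = rng.normal(size=(n, 12))           # e.g. per-window accelerometer statistics
location = rng.integers(0, 4, size=n)             # e.g. kitchen / living room / bedroom / outdoors
loc_onehot = np.eye(4)[location]                  # encode location as one-hot
X = np.hstack([motion_feats, loc_onehot])         # location appended to motion features
y = rng.integers(0, 11, size=n)                   # 11 activity classes

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
print(clf.predict(X[:5]))
```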


Author(s):  
Pankaj Khatiwada ◽  
Matrika Subedi ◽  
Ayan Chatterjee ◽  
Martin Wulf Gerdes

In a smart healthcare system, Human Activity Recognition (HAR) is considered an efficient approach in pervasive computing for inferring activities from activity-sensor readings. Ambient Assisted Living (AAL) in the home or community helps people receive independent care and an enhanced quality of life. However, many AAL models are restricted by multiple factors, including computational cost and system complexity. Moreover, the HAR concept has broad relevance because of its applications, such as content-based video search, sports play analysis, crowd behavior prediction systems, patient monitoring systems, and surveillance systems. This paper implements a HAR system using a popular deep learning algorithm, the Recurrent Neural Network (RNN), with activity data collected from smart activity sensors over time and publicly available in the UC Irvine Machine Learning Repository (UCI). The proposed model involves three processes: (1) data collection, (2) optimal feature learning, and (3) activity recognition. The data gathered from the benchmark repository are first subjected to optimal feature selection, which identifies the most significant features. The proposed optimal feature selection method is based on a new meta-heuristic algorithm called Colliding Bodies Optimization (CBO). An objective function derived from the recognition accuracy is used to accomplish the optimal feature selection. On the benchmark dataset, the proposed model outperforms the conventional models with enhanced performance.
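
A hedged sketch of the wrapper-style objective that a metaheuristic such as CBO could optimize: a binary mask selects features, and fitness is the recognition accuracy of a classifier trained on the masked data. The CBO update rules and the RNN recognizer are not reproduced; a lightweight classifier and a random-search placeholder stand in:

```python
# Fitness of a candidate feature mask = cross-validated recognition accuracy.
# The metaheuristic itself is replaced by a random-search placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Recognition accuracy for the feature subset encoded by a 0/1 mask."""
    if mask.sum() == 0:
        return 0.0
    X_sel = X[:, mask.astype(bool)]
    return cross_val_score(LogisticRegression(max_iter=500), X_sel, y, cv=3).mean()

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))                    # placeholder sensor features
y = rng.integers(0, 4, size=300)                  # placeholder activity labels

best_mask, best_fit = None, -1.0
for _ in range(10):                               # stand-in for CBO iterations
    mask = rng.integers(0, 2, size=X.shape[1])
    f = fitness(mask, X, y)
    if f > best_fit:
        best_mask, best_fit = mask, f
print(best_mask, round(best_fit, 3))
```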


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3845
Author(s):  
Ankita ◽  
Shalli Rani ◽  
Himanshi Babbar ◽  
Sonya Coleman ◽  
Aman Singh ◽  
...  

Traditional pattern recognition approaches have gained considerable popularity, but they depend heavily on manual feature extraction, which limits how well the resulting models generalize. Sequences of accelerometer data recorded by smartphones can be classified into well-known movements, which is the task of human activity recognition. Given the high success and wide adoption of deep learning for recognizing human activities, these techniques are now widely used in wearable devices and smartphones. In this paper, convolutional layers are combined with long short-term memory (LSTM) in a deep neural network for human activity recognition (HAR). The proposed model extracts features automatically and categorizes them with a small set of model attributes. LSTM is a variant of the recurrent neural network (RNN) known for processing temporal sequences. In the proposed architecture, the UCI-HAR dataset, recorded with a Samsung Galaxy S2, is used for various human activities. The CNN and LSTM models are connected in series: for each input, the CNN model is applied, and its output is transferred to the LSTM classifier as a time step. The number of filter maps used to cover the various portions of the input is the most important hyperparameter. The observations are transformed using Gaussian standardization. The proposed CNN-LSTM model is efficient and lightweight, showing higher robustness and better activity-detection capability than traditional algorithms, with an accuracy of 97.89%.
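
A minimal sketch of a CNN-LSTM for windowed inertial data, assuming PyTorch: 1D convolutions extract local features whose outputs are fed to an LSTM as a time sequence, and the last hidden state is classified. The channel counts and layer sizes are assumptions, not the paper's configuration:

```python
# CNN front-end followed by an LSTM over the convolutional feature sequence.
# Layer sizes are placeholders.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, in_channels=9, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x)                   # (batch, 64, time)
        h = h.permute(0, 2, 1)             # LSTM expects (batch, time, features)
        _, (h_n, _) = self.lstm(h)
        return self.fc(h_n[-1])            # classify from the last hidden state

print(CNNLSTM()(torch.randn(2, 9, 128)).shape)   # torch.Size([2, 6])
```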


2021 ◽  
Vol 9 (2) ◽  
pp. 357-376
Author(s):  
Md. Khaliluzzaman ◽  
Md. Abu Bakar Siddiq Sayem ◽  
Lutful Kader Misbah

Human Activity Recognition (HAR), a vast area of computer vision research, has gained prominence in recent years due to its applications in various fields. Because human activity is highly diverse in actions and interactions, and because recognition requires large amounts of data and powerful computational resources, it is very difficult to recognize human activities from an image. To address the computational cost and the vanishing-gradient problem, in this work we propose a revised, simple convolutional neural network (CNN) model named Human Activity Recognition Network (HActivityNet) that automatically extracts and learns features and recognizes activities in a rapid, precise, and consistent manner. To address the problem of imbalanced positive and negative data, we created two datasets: HARDataset1, built from image frames extracted from the KTH dataset, and HARDataset2, prepared from activity video frames recorded by us. Comprehensive experiments show that our model performs better than present state-of-the-art models, attaining an accuracy of 99.5% on HARDataset1 and almost 100% on HARDataset2. The proposed model also performs well on real data.
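
A compact image-frame CNN in the spirit of HActivityNet, sketched under PyTorch assumptions; the number of blocks, filters, and input resolution here are placeholders rather than the published architecture:

```python
# Small 2D CNN that classifies activities from single video frames.
# Architecture details are assumptions, not the authors' exact model.
import torch
import torch.nn as nn

class SimpleActivityCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):                  # x: (batch, 3, H, W) video frames
        return self.classifier(self.features(x))

print(SimpleActivityCNN()(torch.randn(2, 3, 64, 64)).shape)   # torch.Size([2, 6])
```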


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1715
Author(s):  
Michele Alessandrini ◽  
Giorgio Biagetti ◽  
Paolo Crippa ◽  
Laura Falaschetti ◽  
Claudio Turchetti

Photoplethysmography (PPG) is a common and practical technique to detect human activity and other physiological parameters and is commonly implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) on a low-cost, low-power microcontroller while ensuring the required accuracy and low complexity. To reach this goal, (i) we first develop an RNN that integrates PPG and tri-axial accelerometer data, where the accelerometer data can be used to compensate for motion artifacts in PPG in order to accurately detect human activity; (ii) we then port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a resource-constrained system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging.
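
A minimal sketch of an RNN that fuses one PPG channel with tri-axial accelerometer data (four input channels), assuming PyTorch; quantization and the actual port to the STM32-based Cloud-JAM L4 board are outside the scope of this sketch:

```python
# GRU over [PPG, ax, ay, az] windows; the small parameter count is what makes
# an eventual microcontroller port plausible. Sizes are placeholders.
import torch
import torch.nn as nn

class PPGAccelRNN(nn.Module):
    def __init__(self, n_classes=5, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=4, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, 4) = [PPG, ax, ay, az]
        _, h_n = self.rnn(x)
        return self.fc(h_n[-1])

model = PPGAccelRNN()
print(model(torch.randn(2, 256, 4)).shape)                        # torch.Size([2, 5])
print(sum(p.numel() for p in model.parameters()), "parameters")   # small enough to consider an MCU port
```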

