Human Activity Recognition with Convolution Neural Network Using TIAGo Robot

Author(s):  
Irina Mocanu ◽  
Dana Axinte ◽  
Oana Cramariuc ◽  
Bogdan Cramariuc
2018 ◽  
Vol 232 ◽  
pp. 04024
Author(s):  
Yuchen Wang ◽  
Mantao Wang ◽  
Zhouyu Tan ◽  
Jie Zhang ◽  
Zhiyong Li ◽  
...  

With the growth of building monitoring networks, increasing human resources and funds have been invested in building monitoring systems. Computer vision technology has recently been widely used in image recognition, and it is gradually being applied to action recognition as well. Traditional monitoring systems still have many disadvantages. In this paper, a human activity recognition system based on a convolutional neural network is proposed. The human activity recognition engine is constructed using a 3D convolutional neural network together with transfer learning. The server side is built with the Spring MVC framework, and the system pages are designed in HBuilder. The system not only enhances the efficiency and functionality of the building monitoring system but also improves the level of building safety.
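As a rough illustration of the recognition engine described above, the PyTorch sketch below builds a small 3D convolutional backbone and applies transfer learning by freezing the feature extractor and retraining a new classification head. The layer sizes, clip dimensions, and class counts are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal 3D-CNN for clip-based activity recognition (illustrative only)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # input: (C, T, H, W) video clip
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Transfer learning: freeze the (hypothetically pretrained) feature extractor
# and retrain only a new classification head for the target activities.
model = Simple3DCNN(num_classes=10)           # stand-in for a pretrained network
for p in model.features.parameters():
    p.requires_grad = False                   # keep pretrained 3D features fixed
model.classifier = nn.Linear(32, 5)           # fresh head for 5 target activities

clip = torch.randn(2, 3, 8, 32, 32)           # batch of 2 clips: 8 RGB frames, 32x32
logits = model(clip)                          # shape: (2, 5)
```

Only the new head's parameters receive gradients, so fine-tuning on a small activity dataset stays cheap.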


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yingjie Lin ◽  
Jianning Wu

A novel multichannel dilated convolutional neural network is proposed to improve the accuracy of human activity recognition. The model uses a multichannel convolution structure with kernels of various sizes to extract multiscale features from high-dimensional human activity data, and replaces the pooling layers of traditional convolution with dilated convolution. The advantage is that dilated convolution captures intrinsic sequence information by expanding the receptive field of the convolution kernel without increasing the number of model parameters, while the multichannel structure extracts multiscale gait features through multiple parallel convolution paths. An open human activity recognition dataset is used to evaluate the effectiveness of the proposed model. The experimental results show that the model achieves an accuracy of 95.49% and identifies a single sample in approximately 0.34 ms on a low-end machine. These results demonstrate that the model is an efficient real-time HAR model that extracts representative features from sensor signals at low computational cost, and it holds promise as an effective tool in practical applications.
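The core idea, parallel convolution paths with different kernel sizes and dilation instead of pooling, can be sketched in PyTorch as follows. The channel counts, dilation rates, window length, and number of classes are illustrative assumptions, not the model described in the paper.

```python
import torch
import torch.nn as nn

class MultichannelDilatedCNN(nn.Module):
    """Parallel 1-D dilated convolution paths with different kernel sizes.
    Dilation widens the receptive field without pooling and without
    adding parameters; each path captures features at a different scale."""
    def __init__(self, in_channels=9, num_classes=6, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(
                # padding chosen so the sequence length is preserved
                nn.Conv1d(in_channels, 32, k, dilation=2, padding=(k - 1)),
                nn.ReLU(),
                nn.Conv1d(32, 32, k, dilation=4, padding=2 * (k - 1)),
                nn.ReLU(),
            )
            for k in kernel_sizes
        ])
        self.head = nn.Linear(32 * len(kernel_sizes), num_classes)

    def forward(self, x):                               # x: (batch, channels, time)
        feats = [p(x).mean(dim=2) for p in self.paths]  # global average over time
        return self.head(torch.cat(feats, dim=1))

window = torch.randn(4, 9, 128)       # 4 windows of 9-axis sensor data, 128 samples
out = MultichannelDilatedCNN()(window)   # shape: (4, 6)
```

Global average pooling over time replaces the flattening step, so no max-pooling layers are needed anywhere in the network.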


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1715
Author(s):  
Michele Alessandrini ◽  
Giorgio Biagetti ◽  
Paolo Crippa ◽  
Laura Falaschetti ◽  
Claudio Turchetti

Photoplethysmography (PPG) is a common and practical technique to detect human activity and other physiological parameters and is commonly implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) in a low-cost, low-power microcontroller while ensuring the required performance in terms of accuracy and low complexity. To reach this goal, (i) we first develop an RNN that integrates PPG and tri-axial accelerometer data, where the accelerometer data can be used to compensate for motion artifacts in PPG in order to accurately detect human activity; (ii) then, we port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a resource-constrained system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging.
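The sensor-fusion idea in step (i), feeding PPG and tri-axial accelerometer samples jointly into a recurrent network, can be sketched in PyTorch as below. The LSTM size, window length, and class count are illustrative assumptions, and the actual embedded port would of course use a quantized, C-level implementation rather than this Python model.

```python
import torch
import torch.nn as nn

class PPGAccelRNN(nn.Module):
    """Small LSTM fusing PPG with tri-axial accelerometer samples, so the
    acceleration context can help compensate for PPG motion artifacts."""
    def __init__(self, num_classes=5, hidden=32):
        super().__init__()
        # input_size=4: one PPG channel plus three accelerometer axes per step
        self.rnn = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):                  # x: (batch, time, 4)
        out, _ = self.rnn(x)
        return self.fc(out[:, -1])         # classify from the final hidden state

ppg = torch.randn(2, 100, 1)               # 100-sample PPG window
accel = torch.randn(2, 100, 3)             # synchronized tri-axial accelerometer
logits = PPGAccelRNN()(torch.cat([ppg, accel], dim=2))  # shape: (2, 5)
```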


Author(s):  
Muhammad Muaaz ◽  
Ali Chelli ◽  
Martin Wulf Gerdes ◽  
Matthias Pätzold

A human activity recognition (HAR) system acts as the backbone of many human-centric applications, such as active assisted living and in-home monitoring for elderly and physically impaired people. Although existing Wi-Fi-based human activity recognition methods report good results, their performance is affected by changes in the ambient environment. In this work, we present Wi-Sense, a human activity recognition system that uses a convolutional neural network (CNN) to recognize human activities based on environment-independent fingerprints extracted from the Wi-Fi channel state information (CSI). First, Wi-Sense captures the CSI by using a standard Wi-Fi network interface card. Wi-Sense applies the CSI ratio method to reduce the noise and the impact of the phase offset. In addition, it applies principal component analysis to remove redundant information. This step not only reduces the data dimension but also removes the environmental impact. Thereafter, we compute the spectrogram of the processed data, which reveals environment-independent time-variant micro-Doppler fingerprints of the performed activity. We use these spectrogram images to train a CNN. We evaluate our approach using a human activity dataset collected from nine volunteers in an indoor environment. Our results show that Wi-Sense can recognize these activities with an overall accuracy of 97.78%. To stress the applicability of the proposed Wi-Sense system, we provide an overview of the standards involved in health information systems and systematically describe how the Wi-Sense HAR system can be integrated into the eHealth infrastructure.
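The preprocessing pipeline described above (CSI ratio, PCA, spectrogram) can be sketched with NumPy on synthetic data. The array shapes, two-antenna setup, sampling rate, and STFT window sizes here are illustrative assumptions, not the parameters used by Wi-Sense.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy CSI stream: (time samples, subcarriers) per receive antenna, complex-valued.
csi_ant1 = rng.standard_normal((1000, 30)) + 1j * rng.standard_normal((1000, 30))
csi_ant2 = rng.standard_normal((1000, 30)) + 1j * rng.standard_normal((1000, 30))

# 1) CSI ratio: both antennas share the same RF oscillator, so dividing one
#    antenna's CSI by the other's cancels the common phase offset.
csi_ratio = csi_ant1 / csi_ant2

# 2) PCA via SVD: keep the first principal component, discarding redundant
#    subcarrier information along with static, environment-specific content.
centered = csi_ratio - csi_ratio.mean(axis=0)
_, _, vh = np.linalg.svd(centered, full_matrices=False)
principal = centered @ vh[0].conj()            # shape: (1000,)

# 3) Short-time Fourier transform of that component yields the time-variant
#    micro-Doppler spectrogram used as the CNN input image.
def stft_magnitude(signal, nperseg=128, step=32):
    frames = [signal[i:i + nperseg] * np.hanning(nperseg)
              for i in range(0, len(signal) - nperseg + 1, step)]
    return np.abs(np.fft.fft(np.array(frames), axis=1)).T  # (freq, time)

image = 20 * np.log10(stft_magnitude(principal) + 1e-12)   # dB-scaled image
```

The resulting `image` array plays the role of the spectrogram picture that the CNN is trained on; with real CSI captures, human motion would appear as structured micro-Doppler traces rather than noise.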

