Activity Recognition
Recently Published Documents

Sensor Review ◽  
2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Gomathi V. ◽  
Kalaiselvi S. ◽  
Thamarai Selvi D

Purpose: This work aims to develop a novel fuzzy associator rule-based fuzzified deep convolutional neural network (FDCNN) architecture for the classification of smartphone sensor-based human activity recognition. The work focuses on fusing the λmax method for weight initialization, as a data normalization technique, to achieve high classification accuracy.

Design/methodology/approach: The major contribution of this work is the FDCNN architecture, which is first fused with a fuzzy logic-based data aggregator. The work normalizes the statistical parameters of the University of California, Irvine (UCI) data set before feeding them to the convolutional neural network layers. The FDCNN model with the λmax method ensures faster convergence and improved accuracy in sensor-based human activity recognition. An impact analysis with hyper-parameter tuning on the proposed model validates the appropriateness of the results.

Findings: The proposed FDCNN model with the λmax method outperformed state-of-the-art models, attaining an overall accuracy of 97.89% and an overall F1 score of 0.9795.

Practical implications: The proposed fuzzy associate rule layer (FAL) performs feature association based on fuzzy rules and regulates uncertainty in the sensor data caused by signal interference and noise. The normalized data is also subjectively grouped based on the FAL kernel structure weights assigned with the λmax method.

Social implications: The work contributes a novel FDCNN architecture that can support those keen on advancing human activity recognition (HAR).

Originality/value: A novel FDCNN architecture is implemented with appropriate FAL kernel structures.
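The abstract does not specify how the fuzzified normalization is computed, so the following is only a hedged sketch of the general idea it describes (min-max normalizing statistical features and mapping them to fuzzy membership degrees before a CNN); all function names and the triangular fuzzy sets are hypothetical, not the paper's λmax method.

```python
import numpy as np

def normalize_features(X):
    """Min-max normalize each statistical feature column to [0, 1]."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / np.where(mx - mn == 0, 1, mx - mn)

def triangular_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def fuzzify(Xn):
    """Map each normalized feature to 'low'/'medium'/'high' membership degrees,
    giving a (n_samples, n_features, 3) tensor to feed to a CNN."""
    low = triangular_membership(Xn, -0.5, 0.0, 0.5)
    med = triangular_membership(Xn, 0.0, 0.5, 1.0)
    high = triangular_membership(Xn, 0.5, 1.0, 1.5)
    return np.stack([low, med, high], axis=-1)
```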

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 226
Muhammad Ehatisham-ul-Haq ◽  
Fiza Murtaza ◽  
Muhammad Awais Azam ◽  
Yasar Amin

Advancement in smart sensing and computing technologies has created a dynamic opportunity to develop intelligent systems for human activity monitoring and, thus, assisted living. Consequently, many researchers have put effort into implementing sensor-based activity recognition systems. However, recognizing people's natural behavior and physical activities across diverse contexts remains a challenging problem because human physical activities are often disrupted by changes in their surroundings and environments. In addition to physical activity recognition, it is therefore vital to model and infer the user's context information to better capture human-environment interactions. This paper proposes a new approach to activity recognition in the wild, which entails modeling and identifying detailed human contexts (such as human activities, behavioral environments, and phone states) using portable accelerometer sensors. The proposed scheme offers a detailed, fine-grained representation of natural human activities with contexts, which is crucial for effectively modeling human-environment interactions in context-aware applications and systems. The approach is validated through a series of experiments and achieves an average balanced accuracy of 89.43%, demonstrating its effectiveness.
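The paper's own pipeline is not detailed in the abstract; as a hedged illustration of the standard first step in accelerometer-based recognition of this kind, the sketch below segments a raw 3-axis stream into overlapping windows and extracts simple baseline features (all names hypothetical).

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (t, 3) accelerometer stream into overlapping windows."""
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

def window_features(windows):
    """Per-window mean and standard deviation of each axis, a common
    baseline feature set for activity/context classifiers."""
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)
```

Each feature row would then be passed to whatever classifier models the joint activity/context labels.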

Anna Ferrari ◽  
Daniela Micucci ◽  
Marco Mobilio ◽  
Paolo Napoletano

Abstract: Human activity recognition (HAR) is a line of research whose goal is to design and develop automatic techniques for recognizing activities of daily living (ADLs) from sensor signals. HAR is an active research field, responding to the ever-increasing need to collect information on ADLs remotely for diagnostic and therapeutic purposes. Traditionally, HAR used environmental or wearable sensors to acquire signals and relied on classical machine-learning techniques to classify ADLs. In recent years, HAR has been moving towards the use of wearable devices (such as smartphones and fitness trackers, which people use daily and which include reliable inertial sensors) and deep learning techniques (given the encouraging results obtained in computer vision). One of the major challenges in HAR is population diversity, which makes it difficult for traditional machine-learning algorithms to generalize. Recently, researchers have successfully addressed the problem with techniques that combine personalization with traditional machine learning. To date, no effort has been directed at investigating the benefits personalization can bring to deep learning techniques in the HAR domain. The goal of our research is to verify whether personalization, applied to both traditional and deep learning techniques, can lead to better performance than classical approaches (i.e., without personalization). The experiments were conducted on three datasets that are extensively used in the literature and contain metadata about the subjects. AdaBoost was chosen for traditional machine learning and a convolutional neural network for deep learning; both techniques have been shown to offer good performance. Personalization considers both the physical characteristics of the subjects and the inertial signals they generate.
Results suggest that personalization is most effective when applied to traditional machine-learning techniques rather than to deep learning. Moreover, deep learning without personalization performs better than any other method evaluated in the paper when the number of training samples is high and the samples are heterogeneous (i.e., they represent a wider spectrum of the population). This suggests that traditional deep learning can be more effective, provided that a large and heterogeneous dataset is available, since it intrinsically models population diversity during training.
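The abstract says personalization uses the subjects' physical characteristics, but not how; one common way to realize this with a traditional learner is to weight training subjects by similarity to the target user and pass those weights to the classifier. The sketch below is an assumption-laden illustration of that idea (function name, Gaussian similarity, and `bandwidth` are all hypothetical, not the paper's method).

```python
import numpy as np

def similarity_weights(train_meta, target_meta, bandwidth=1.0):
    """Weight each training subject by Gaussian similarity of physical
    characteristics (e.g. age, height, weight) to the target subject.
    Returns weights that sum to 1."""
    d2 = ((train_meta - target_meta) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w / w.sum()
```

The resulting vector could then be supplied as `sample_weight` when fitting a traditional learner such as AdaBoost, so subjects physically similar to the target user dominate training.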

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262181
Prasetia Utama Putra ◽  
Keisuke Shima ◽  
Koji Shimatani

Multiple cameras are used to resolve the occlusion problems that often occur in single-view human activity recognition. Building on the success of representation learning with deep neural networks (DNNs), recent works have proposed DNN models that estimate human activity from multi-view inputs. However, currently available datasets are inadequate for training DNN models to a high accuracy. To address this issue, this study presents a DNN model, trained with transfer learning and shared-weight techniques, that classifies human activity from multiple cameras. The model comprises pre-trained convolutional neural networks (CNNs), attention layers, long short-term memory networks with residual learning (LSTMRes), and softmax layers. The experimental results suggest that the proposed model achieves promising performance on the challenging multi-view HAR (MVHAR) datasets IXMAS (97.27%) and i3DPost (96.87%). A competitive recognition rate was also observed in online classification.
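The full architecture (pre-trained CNNs, attention, LSTMRes) is beyond an abstract, but the shared-weight attention fusion it implies can be sketched: the same scoring vector is applied to every camera's feature vector, and the views are combined by their softmax attention weights. This is only an illustrative assumption, not the paper's exact layer.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_views(view_features, w):
    """Attention-weighted fusion of per-camera feature vectors.
    The same scoring vector `w` is shared across all views."""
    scores = np.array([f @ w for f in view_features])  # one scalar per view
    alpha = softmax(scores)                            # attention distribution
    return sum(a * f for a, f in zip(alpha, view_features))
```

The fused vector would then feed the temporal (LSTM) and classification (softmax) stages.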

Sourish Gunesh Dhekane ◽  
Shivam Tiwari ◽  
Manan Sharma ◽  
Dip Sankar Banerjee

Qin Ni ◽  
Zhuo Fan ◽  
Lei Zhang ◽  
Bo Zhang ◽  
Xiaochen Zheng

Abstract: Human activity recognition (HAR) has received increasing attention and can play an important role in many fields, such as healthcare and the intelligent home. This paper discusses an application of activity recognition in the healthcare field. Essential tremor (ET) is a common neurological disorder that causes involuntary tremor in those affected, and it is easily misdiagnosed as other diseases. We combine essential tremor and activity recognition to recognize ET patients' activities and to evaluate the degree of ET, providing an auxiliary analysis for disease diagnosis using a stacked denoising autoencoder (SDAE) model. Because the behavior dataset collected from ET patients is small, it is difficult for the model to learn enough useful features; resampling techniques are therefore proposed to alleviate the small-sample-size and class-imbalance problems. In our experiment, 20 patients with ET and 5 healthy people were chosen, and their acceleration data were collected for activity recognition. The experimental results show significant performance on ET patients' activity recognition, with the SDAE model achieving an overall accuracy of 93.33%. Moreover, the model is also used to evaluate the degree of ET and achieves an accuracy of 95.74%. Across this set of experiments, the model attains significant performance on both ET patients' activity recognition and degree-of-tremor assessment.
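The abstract names resampling against small and imbalanced samples without specifying the technique; a minimal sketch of one common choice, random oversampling of minority classes to the majority-class size, is shown below (the function and its behavior are an assumption, not necessarily the paper's method).

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class samples (with replacement) until every
    class matches the size of the largest class."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [], []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=target - n, replace=True)
        keep = np.concatenate([idx, extra])
        Xs.append(X[keep])
        ys.append(y[keep])
    return np.concatenate(Xs), np.concatenate(ys)
```

The balanced set would then be used to train the SDAE-based classifier.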
