A Hierarchical Approach to Activity Recognition and Fall Detection Using Wavelets and Adaptive Pooling

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6653
Author(s):  
Abbas Shah Syed ◽  
Daniel Sierra-Sosa ◽  
Anup Kumar ◽  
Adel Elmaghraby

Human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications. In particular, inertial-sensor-based systems have become increasingly popular because they do not restrict users’ movement and are also relatively simple to implement compared to other approaches. In this paper, we present a hierarchical classification framework based on wavelets and adaptive pooling for activity recognition and fall detection, including prediction of fall direction and severity. To accomplish this, windowed segments were extracted from each recording of inertial measurements from the SisFall dataset. A combination of wavelet-based feature extraction and adaptive pooling was used before a classification framework was applied to determine the output class. Furthermore, tests were performed to determine the best observation window size and the sensor modality to use. Based on these experiments, the best window size was found to be 3 s and the best sensor modality a combination of accelerometer and gyroscope measurements. These were used to perform activity recognition and fall detection with a resulting weighted F1 score of 94.67%. The framework is novel in its approach to the human activity recognition and fall detection problem: it is computationally less intensive while still providing promising results, and can therefore contribute to edge deployment of such systems.
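
As a hedged illustration of the kind of pipeline this abstract describes, the sketch below computes wavelet features with PyWavelets and reduces each sub-band to a fixed length with adaptive average pooling for one windowed inertial segment. The wavelet family (db4), decomposition level, pooled length, and the 200 Hz / 3 s window are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch: wavelet features + adaptive average pooling for one
# window of accelerometer + gyroscope data (wavelet family, decomposition
# level, and pooled length are assumptions, not the paper's exact settings).
import numpy as np
import pywt

def adaptive_avg_pool(coeffs, out_len=8):
    """Reduce a 1-D coefficient array to a fixed length by averaging
    over approximately equal-sized bins, mimicking adaptive pooling."""
    chunks = np.array_split(np.asarray(coeffs, dtype=float), out_len)
    return np.array([c.mean() for c in chunks])

def window_features(window, wavelet="db4", level=3, out_len=8):
    """window: (n_samples, n_channels) inertial segment, e.g. a 3 s window
    with 6 channels (3-axis accelerometer + 3-axis gyroscope)."""
    feats = []
    for ch in range(window.shape[1]):
        # Multilevel discrete wavelet decomposition of one channel.
        for band in pywt.wavedec(window[:, ch], wavelet, level=level):
            feats.append(adaptive_avg_pool(band, out_len))
    return np.concatenate(feats)          # fixed-length feature vector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segment = rng.standard_normal((600, 6))   # fake 3 s window at an assumed 200 Hz
    print(window_features(segment).shape)     # 6 channels * 4 sub-bands * 8 = (192,)
```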

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3910 ◽  
Author(s):  
Taeho Hur ◽  
Jaehun Bang ◽  
Thien Huynh-The ◽  
Jongwon Lee ◽  
Jee-In Kim ◽  
...  

The most significant barrier to success in human activity recognition is extracting and selecting the right features. In traditional methods, the features are chosen by humans, which requires the user to have expert knowledge or to do a large amount of empirical study. Newly developed deep learning technology can automatically extract and select features. Among the various deep learning methods, convolutional neural networks (CNNs) have the advantages of local dependency and scale invariance and are suitable for temporal data such as accelerometer (ACC) signals. In this paper, we propose an efficient human activity recognition method, namely Iss2Image (Inertial sensor signal to Image): a novel encoding technique for transforming an inertial sensor signal into an image with minimal distortion, together with a CNN model for image-based activity classification. Iss2Image converts real-number values from the X, Y, and Z axes into three color channels to precisely infer correlations among successive sensor signal values in three different dimensions. We experimentally evaluated our method on several well-known datasets and on our own dataset collected from a smartphone and smartwatch. The proposed method shows higher accuracy than other state-of-the-art approaches on the tested datasets.
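
The sketch below illustrates the general idea of mapping the X, Y, and Z axes of an inertial window to the three color channels of an image. The min-max scaling to 0–255 and the square reshape are simplifications assumed for this example, not the exact Iss2Image quantization described in the paper.

```python
# Simplified illustration of an Iss2Image-style encoding: each inertial axis
# (X, Y, Z) becomes one color channel. The scaling and square reshape are
# assumptions for the sketch, not the paper's exact scheme.
import numpy as np

def signal_to_image(window, side=16):
    """window: (side*side, 3) array of accelerometer samples (X, Y, Z).
    Returns a (side, side, 3) uint8 RGB-like image."""
    assert window.shape == (side * side, 3)
    img = np.empty((side * side, 3), dtype=np.uint8)
    for axis in range(3):
        col = window[:, axis].astype(float)
        span = col.max() - col.min() or 1.0      # avoid division by zero
        img[:, axis] = np.round(255 * (col - col.min()) / span)
    return img.reshape(side, side, 3)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    acc_window = rng.standard_normal((256, 3))   # placeholder sensor window
    print(signal_to_image(acc_window).shape)     # (16, 16, 3)
```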


2021 ◽  
Author(s):  
Gábor Csizmadia ◽  
Krisztina Liszkai-Peres ◽  
Bence Ferdinandy ◽  
Ádám Miklósi ◽  
Veronika Konok

Human activity recognition (HAR) using machine learning (ML) methods is a relatively new approach for collecting and analyzing large amounts of human behavioral data using special wearable sensors. Our main goal was to find a reliable method that could automatically detect various playful and daily routine activities in children. We defined 40 activities for ML recognition, and we collected activity motion data by means of wearable smartwatches running a special SensKid software. We analyzed the data of 34 children (19 girls, 15 boys; age range: 6.59–8.38 years; median age = 7.47). All children were typically developing first graders from three elementary schools. The activity recognition was a binary classification task evaluated with a Light Gradient Boosted Machine (LGBM) learning algorithm, a decision-tree-based method, with 3-fold cross-validation. We used the sliding window technique during signal processing and aimed to find the best window size for the analysis of each behavior element in order to achieve the most effective settings. Seventeen activities out of 40 were successfully recognized with AUC values above 0.8. The window size had no significant effect. The overall accuracy was 0.95, which is in the top segment of previously published comparable HAR results. In summary, LGBM is a very promising solution for HAR. In line with previous findings, our results provide a firm basis for a more precise and effective recognition system that can make human behavioral analysis faster and more objective.
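
A minimal sketch of the evaluation setup described above, assuming a LightGBM binary classifier scored with 3-fold cross-validation; the feature matrix is a synthetic placeholder for the per-window smartwatch features, not the study's data.

```python
# Hedged sketch: LightGBM binary classification with 3-fold cross-validation,
# mirroring the evaluation setup described in the abstract. X and y are
# synthetic stand-ins for window-level smartwatch features and labels.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 24))        # hypothetical per-window features
y = rng.integers(0, 2, size=1000)          # 1 = target activity, 0 = everything else

clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc")
print("AUC per fold:", np.round(auc, 3), "mean:", round(float(auc.mean()), 3))
```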


The rise in life expectancy and the dwindling birth rate in modern society have led to population ageing, a phenomenon witnessed across the world over the past few decades. India is also part of this demographic transition, which will have a direct impact on the societal and economic conditions of the country. To deal effectively with this phenomenon, stakeholders are developing Information and Communication Technology (ICT) based ecosystems that address the needs of elderly people, such as independent living, activity recognition, vital health sign monitoring, and prevention of social isolation. Ambient Assisted Living (AAL) is one such ecosystem, capable of providing a safe and secure living environment for elderly and disabled people. In this paper we focus on reviewing sensor-based Human Activity Recognition (HAR) and Vital Health Sign Monitoring (VHSM) as applicable to AAL environments. We first describe the AAL environment in general, then present brief insights into sensor modalities and different deep learning architectures, and finally survey the existing literature on HAR and VHSM according to the sensor modality and deep learning approach used.


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6316
Author(s):  
Dinis Moreira ◽  
Marília Barandas ◽  
Tiago Rocha ◽  
Pedro Alves ◽  
Ricardo Santos ◽  
...  

With the fast increase in the demand for location-based services and the proliferation of smartphones, the topic of indoor localization is attracting great interest. In indoor environments, the activities users perform carry useful semantic information. These activities can then be used by indoor localization systems to confirm users’ current relative locations in a building. In this paper, we propose a deep-learning model based on a Convolutional Long Short-Term Memory (ConvLSTM) network to classify human activities within the indoor localization scenario using smartphone inertial sensor data. Results show that the proposed human activity recognition (HAR) model accurately identifies nine types of activities: not moving, walking, running, going up in an elevator, going down in an elevator, walking upstairs, walking downstairs, going up a ramp, and going down a ramp. Moreover, predicted human activities were integrated within an existing indoor positioning system and evaluated in a multi-story building across several testing routes, with an average positioning error of 2.4 m. The results show that the inclusion of human activity information can reduce the overall localization error of the system and actively contribute to the better identification of floor transitions within a building. The conducted experiments demonstrated promising results and verified the effectiveness of using human activity-related information for indoor localization.
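
The abstract does not specify the network's layer configuration, so the sketch below substitutes a generic Conv1D + LSTM stack (a common CNN-LSTM reading, not the authors' exact ConvLSTM architecture) with assumed window length, channel count, and layer sizes, just to show how such a nine-class HAR model might be assembled in Keras.

```python
# Hedged sketch of a nine-class HAR classifier over smartphone inertial
# windows. Layer choices and sizes are assumptions; the paper's ConvLSTM
# details are not given in the abstract.
import tensorflow as tf

N_TIMESTEPS = 128      # assumed samples per window
N_CHANNELS = 6         # 3-axis accelerometer + 3-axis gyroscope (assumed)
N_CLASSES = 9          # activity classes listed in the abstract

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu",
                           input_shape=(N_TIMESTEPS, N_CHANNELS)),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```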


Author(s):  
Wahyu Andhyka Kusuma ◽  
Zamah Sari ◽  
Agus Eko Minarno ◽  
Hardianto Wibowo ◽  
Denar Regata Akbi ◽  
...  

Human activity recognition (HAR) of daily activities has become a leading problem in human physical analysis. Its applications across several areas of physical analysis have grown alongside advances in machine learning methods; topics such as fall detection, medical rehabilitation, and other smart appliances for physical analysis have raised quality of life. Smart wearable devices with inertial accelerometer and gyroscope sensors are popular for physical analysis, and previous research has used these sensors at various positions on the human body. Activities can be classified into three classes: static activity (SA), transition activity (TA), and dynamic activity (DA); by complexity, daily activities can further be separated into low and high complexity. Daily activity patterns have similar shapes and patterns in the gathered sensor signals. The dataset used in this paper was acquired from 30 volunteers. Seven basic machine learning algorithms were evaluated, including Logistic Regression, Support Vector Machine, Decision Tree, Random Forest, Gradient Boosting, and K-Nearest Neighbor. Confusion between activities was resolved with a simple linear method. The proposed Logistic Regression method achieves 98% accuracy, the same as the SVM with a linear kernel; with hyperparameter tuning, both methods retain the same accuracy. Logistic Regression and linear SVC are better used to recognize SA and DA, excluding TA.
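
As a hedged sketch of the kind of comparison reported above, the snippet below trains Logistic Regression and a linear-kernel SVM with scikit-learn on synthetic stand-in data; the dataset and resulting accuracies are placeholders and will not reproduce the reported 98%.

```python
# Sketch: comparing Logistic Regression with a linear-kernel SVM on
# window-level HAR features. The synthetic data only stands in for the
# 30-volunteer dataset described in the abstract.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                  ("SVM (linear kernel)", SVC(kernel="linear"))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.3f}")
```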

