Elderly Fall Detection using Lightweight Convolution Deep Learning Model

Author(s):  
Neeraj Varshney

Older people who live alone at home face a serious risk of falling while moving from one place to another, and such falls can sometimes be life threatening. To prevent this situation, several fall monitoring systems based on sensor data have been proposed. However, these systems suffer from misclassification, identifying falls as daily life activities and routine activities as falls. Towards this end, this paper proposes a deep learning based model that uses heart rate, blood pressure and sugar level data to identify falls alongside other daily life activities such as walking, running and jogging. For accurate identification of fall accidents, a publicly accessible dataset and a lightweight CNN model are used. The proposed model reports a precision of 98.21%.
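
The paper itself does not include an implementation; the following is a minimal sketch of a lightweight 1D CNN classifier over fixed windows of heart-rate, blood-pressure and sugar-level readings. The channel count, window length and class set are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LightweightFallCNN(nn.Module):
    """Small 1D CNN over fixed-length windows of vital-sign channels.

    Assumes 3 input channels (heart rate, blood pressure, sugar level)
    and 5 output classes (fall, walking, running, jogging, other);
    these are illustrative choices, not taken from the paper.
    """
    def __init__(self, in_channels=3, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # keeps the parameter count small
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = LightweightFallCNN()
dummy = torch.randn(8, 3, 128)         # 8 windows of 128 samples each
print(model(dummy).shape)              # torch.Size([8, 5])
```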

Sensors
2018
Vol 18 (10)
pp. 3363
Author(s):
Taylor Mauldin
Marc Canby
Vangelis Metsis
Anne Ngu
Coralys Rivera

This paper presents SmartFall, an Android app that uses accelerometer data collected from a commodity-based smartwatch Internet of Things (IoT) device to detect falls. The smartwatch is paired with a smartphone that runs the SmartFall application, which performs the computation necessary for the prediction of falls in real time without incurring latency in communicating with a cloud server, while also preserving data privacy. We experimented with both traditional (Support Vector Machine and Naive Bayes) and non-traditional (Deep Learning) machine learning algorithms for the creation of fall detection models using three different fall datasets (Smartwatch, Notch, Farseeing). Our results show that a Deep Learning model for fall detection generally outperforms more traditional models across the three datasets. This is attributed to the Deep Learning model’s ability to automatically learn subtle features from the raw accelerometer data that are not available to Naive Bayes and Support Vector Machine, which are restricted to learning from a small set of manually specified features. Furthermore, the Deep Learning model exhibits a better ability to generalize to new users when predicting falls, an important quality of any model that is to be successful in the real world. We also present a three-layer open IoT system architecture used in SmartFall, which can be easily adapted for the collection and analysis of other sensor data modalities (e.g., heart rate, skin temperature, walking patterns), enabling remote monitoring of a subject’s wellbeing.
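
SmartFall's exact model is not reproduced here; as a hedged illustration of a deep model that learns directly from raw accelerometer windows (in contrast to SVM or Naive Bayes on hand-crafted features), a small recurrent binary classifier might look as follows. The window length, hidden size and choice of an LSTM are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AccelFallLSTM(nn.Module):
    """Binary fall classifier over raw tri-axial accelerometer windows.

    Illustrative only: the window length, hidden size and use of an LSTM
    are assumptions, not the exact SmartFall architecture.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time, 3)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])         # raw logit; apply sigmoid for a probability

model = AccelFallLSTM()
windows = torch.randn(4, 200, 3)        # 4 windows of 200 accelerometer samples
fall_prob = torch.sigmoid(model(windows))
```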


Sensors
2018
Vol 19 (1)
pp. 57
Author(s):
Renjie Ding
Xue Li
Lanshun Nie
Jiazhen Li
Xiandong Si
...  

Human activity recognition (HAR) based on sensor data is a significant problem in pervasive computing. In recent years, deep learning has become the dominant approach in this field due to its high accuracy. However, it is difficult to accurately recognize the activities of one individual using a model trained on data from other users, and this decline in recognition accuracy restricts activity recognition in practice. At present, there is little research on transferring deep learning models in this field. To the best of our knowledge, this is the first empirical study of deep transfer learning between users with unlabeled target data. We compared several widely used algorithms and found that the Maximum Mean Discrepancy (MMD) method is the most suitable for HAR. We studied the distribution of features generated from sensor data, improved the existing method from the perspective of feature distribution with a center loss, and obtained better results. The observations and insights in this study deepen the understanding of transfer learning in the activity recognition field and provide guidance for further research.
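
As a sketch of the domain-alignment idea discussed above, the function below computes a single-kernel Gaussian MMD between source-user and target-user feature batches; during training it would be added to the classification loss (and, in the authors' improvement, combined with a center loss). The kernel bandwidth and feature dimensionality are illustrative assumptions.

```python
import torch

def gaussian_mmd(source, target, sigma=1.0):
    """Squared Maximum Mean Discrepancy between two feature batches under a
    single Gaussian (RBF) kernel; a minimal sketch, not the multi-kernel
    variant often used in practice."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    k_ss = kernel(source, source).mean()
    k_tt = kernel(target, target).mean()
    k_st = kernel(source, target).mean()
    return k_ss + k_tt - 2 * k_st

# Example: align features of labelled source users and an unlabelled target user
src_feat = torch.randn(32, 128)                 # features from a shared encoder
tgt_feat = torch.randn(32, 128)
domain_loss = gaussian_mmd(src_feat, tgt_feat)  # added to the classification loss
```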


With the emergence of new concepts like smart hospitals, video surveillance cameras can be introduced in each room of a hospital for safety and security. These surveillance cameras can also be used to assist patients and hospital staff. In particular, a patient's fall can be detected in real time with the help of these cameras, and assistance can be provided accordingly. Researchers have already developed different models to detect a human fall using a camera. This paper proposes a vision-based deep learning model to detect a human fall. Along with this model, two mathematically based models have also been proposed, which use pre-trained YOLO FCNN and Faster R-CNN architectures to detect a human fall. At the end of this paper, a comparison study is performed on these models to determine which method provides the most accurate results.
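
As a hedged sketch of the detector-plus-rule route described above (not the paper's exact method), one can run a pre-trained Faster R-CNN person detector from torchvision and flag a fall when the detected bounding box becomes much wider than it is tall. The score and aspect-ratio thresholds are illustrative assumptions; torchvision >= 0.13 is assumed for the weights API.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pre-trained COCO detector; class label 1 corresponds to "person".
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def is_fallen(image_path, score_thresh=0.8, ratio_thresh=1.0):
    """Return True if a confidently detected person box is wider than tall.

    The 1.0 aspect-ratio threshold is a hypothetical rule of thumb, not a
    value taken from the paper."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() == 1 and score.item() >= score_thresh:
            x1, y1, x2, y2 = box.tolist()
            if (x2 - x1) / (y2 - y1) > ratio_thresh:   # lying posture heuristic
                return True
    return False
```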


2019
Author(s):
Ngoc Hieu Tran
Rui Qiao
Lei Xin
Xin Chen
Baozhen Shan
...  

Tumor-specific neoantigens play a central role in developing personalized vaccines for cancer immunotherapy. We propose, for the first time, a personalized de novo sequencing workflow to identify HLA-I and HLA-II neoantigens directly and solely from mass spectrometry data. Our workflow trains a personal deep learning model on the immunopeptidome of an individual patient and then uses it to predict mutated neoantigens of that patient. This personalized learning and mass spectrometry-based approach enables comprehensive and accurate identification of neoantigens. We applied the workflow to datasets of five melanoma patients and substantially improved the accuracy and identification rate of de novo HLA peptides by 14.3% and 38.9%, respectively. This subsequently led to the identification of 10,440 HLA-I and 1,585 HLA-II new peptides that were not present in existing databases. Most importantly, our workflow successfully discovered 17 neoantigens of both HLA-I and HLA-II, including those with validated T cell responses and novel neoantigens that had not been reported in previous studies.


2021
Vol 11 (16)
pp. 7355
Author(s):
Zhiheng Xu
Xiong Ding
Kun Yin
Ziyue Li
Joan A. Smyth
...  

Ticks are considered the second leading vector of human diseases. Different ticks can transmit a variety of pathogens that cause various tick-borne diseases (TBD), such as Lyme disease. Currently, it remains a challenge to diagnose Lyme disease because of its non-specific symptoms. Rapid and accurate identification of tick species plays an important role in predicting potential disease risk for tick-bitten patients, and ensuring timely and effective treatment. Here, we developed, optimized, and tested a smartphone-based deep learning algorithm (termed “TickPhone app”) for tick identification. The deep learning model was trained by more than 2000 tick images and optimized over different parameters, including normalized image sizes, deep learning architectures, image styles, and training–testing dataset distributions. The optimized deep learning model achieved a training accuracy of ~90% and a validation accuracy of ~85%. The TickPhone app was used to identify 31 independent tick species and achieved an accuracy of 95.69%. Such a simple and easy-to-use TickPhone app showed great potential to estimate epidemiology and risk of tick-borne disease, help health care providers better predict potential disease risk for tick-bitten patients, and ultimately enable timely and effective medical treatment for patients.
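
The TickPhone model itself is not reproduced here; as a rough sketch of the underlying approach, fine-tuning a compact pre-trained image classifier on tick photos, the following snippet attaches a 31-class head to MobileNetV2. The backbone choice, input size and training details are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Compact pre-trained backbone, suitable for on-device inference (assumed choice).
backbone = models.mobilenet_v2(weights="DEFAULT")
backbone.classifier[1] = nn.Linear(backbone.last_channel, 31)  # 31 tick species

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-ins for one batch of labelled tick photos (random data for illustration).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 31, (8,))

loss = criterion(backbone(images), labels)   # one fine-tuning step
loss.backward()
optimizer.step()
```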


Sensors
2020
Vol 20 (21)
pp. 6126
Author(s):
Tae Hyong Kim
Ahnryul Choi
Hyun Mu Heo
Hyunggun Kim
Joung Hwan Mun

Pre-impact fall detection can detect a fall before a body segment hits the ground. When it is integrated with a protective system, it can directly prevent an injury caused by hitting the ground. The impact acceleration peak magnitude is one of the key measurement factors that affect the severity of an injury, and it can be used as a design parameter for wearable protective devices that prevent injuries. In our study, a novel method is proposed to predict the impact acceleration magnitude after loss of balance using a single inertial measurement unit (IMU) sensor and a sequential deep learning model. Twenty-four healthy participants took part in fall experiments for this study. Each participant wore a single IMU sensor on the waist to collect tri-axial accelerometer and angular velocity data. A deep learning method, bi-directional long short-term memory (LSTM) regression, is applied to predict a fall’s impact acceleration magnitude prior to impact, for falls in five directions. To improve prediction performance, data augmentation techniques were applied to increase the size of the dataset. Our proposed model showed a mean absolute percentage error (MAPE) of 6.69 ± 0.33% with an r value of 0.93 when all three types of data augmentation techniques were applied. Additionally, MAPE was significantly reduced, by 45.2%, when the number of training datasets was increased 4-fold. These results show that impact acceleration magnitude can be used as an activation parameter for fall prevention, such as in a wearable airbag system, by optimizing the deployment process to minimize fall injury in real time.
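
As an illustration of the sequential regression idea, a bi-directional LSTM that maps a pre-impact window of six IMU channels (tri-axial acceleration and angular velocity) to a single impact acceleration peak magnitude could be sketched as below; the window length and hidden size are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ImpactMagnitudeBiLSTM(nn.Module):
    """Bi-directional LSTM regressor from a pre-impact IMU window to a single
    impact acceleration peak magnitude. Illustrative sketch only."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=6, hidden_size=hidden,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                # x: (batch, time, 6)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])  # regression from the final time step

model = ImpactMagnitudeBiLSTM()
window = torch.randn(16, 100, 6)         # 16 pre-impact windows of 100 samples
predicted_peak = model(window)           # predicted peak magnitudes
```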


Sensors
2021
Vol 21 (4)
pp. 1064
Author(s):
I Nyoman Kusuma Wardana
Julian W. Gardner
Suhaib A. Fahmy

Accurate air quality monitoring requires processing of multi-dimensional, multi-location sensor data, which has previously been considered in centralised machine learning models. These are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance running on the edge devices in terms of both accuracy and latency. The hybrid deep learning model in this work comprises a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models, as measured by RMSE and MAE. The proposed model was optimised for edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and Raspberry Pi 4 Model B (RPi4B). This optimised model reduced the file size to a quarter of the original, with further size reduction achieved by applying different post-training quantisation schemes. In total, 8272 hourly samples were continuously fed to the edge device, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s for the RPi4B and RPi3B+, respectively.
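
As a hedged sketch of the workflow described above, the following snippet builds a small hybrid 1D-CNN + LSTM regressor for hourly PM2.5 and applies post-training quantisation with TensorFlow Lite for deployment on a Raspberry Pi. The layer sizes, 24-hour window and eight input features are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf

# Hybrid 1D-CNN + LSTM regressor: 24 past hours of 8 sensor features -> next-hour PM2.5.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 8)),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")

def representative_data():
    # Calibration samples for quantisation; random stand-ins for real sensor windows.
    for _ in range(100):
        yield [np.random.rand(1, 24, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Full-integer-only mode (as one of the quantisation options) can additionally be
# requested via: converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
open("pm25_quant.tflite", "wb").write(tflite_model)   # deploy this file on the RPi
```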

