The effect of window size and lead time on pre-impact fall detection accuracy using support vector machine analysis of waist mounted inertial sensor data

Author(s):  
Omar Aziz ◽  
Colin M. Russell ◽  
Edward J. Park ◽  
Stephen N. Robinovitch
2018 ◽  
Vol 29 (9) ◽  
pp. 2027-2039 ◽  
Author(s):  
Zhangjie Chen ◽  
Ya Wang

This article presents an infrared–ultrasonic sensor fusion approach to support vector machine–based fall detection, a capability often required in elderly healthcare. Its detection algorithms and performance evaluation are detailed. The location, size, and temperature profile of the user can be estimated with a novel sensor fusion algorithm. Different feature sets for the support vector machine–based machine learning algorithm are analyzed, and their impact on fall detection accuracy is evaluated and compared empirically. The experiments cover three non-fall activities (standing, sitting, and stooping) and two fall actions (forward falling and sideways falling) to simulate the daily activities of the elderly. Fall detection accuracy is studied on both discretely and continuously (closer to reality) recorded experimental data. For the discrete recordings, an average accuracy of 92.2% is achieved with the stand-alone Grid-EYE, increasing to 96.7% with sensor fusion. For the continuous recordings (180 training sets and 60 test sets at each distance), the average accuracy is below 70.0% with the stand-alone Grid-EYE and increases to around 90.3% with sensor fusion. New features will be explored in the next step to further increase detection accuracy.
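As a rough illustration of the classification stage described above (not the authors' code), the sketch below trains an RBF-kernel SVM on hypothetical fused features such as estimated user location, size, and temperature profile; the feature layout and data are placeholders.

```python
# Illustrative sketch only: SVM fall detector trained on assumed fused
# infrared/ultrasonic features (location, blob size, temperature statistics).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per recorded event; columns are assumed fused features
# (e.g. vertical position from the Grid-EYE, range from the ultrasonic sensor,
# blob size, mean/max temperature). y: 1 = fall, 0 = non-fall activity.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 6))        # placeholder for real fused features
y = rng.integers(0, 2, size=240)     # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```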


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3363 ◽  
Author(s):  
Taylor Mauldin ◽  
Marc Canby ◽  
Vangelis Metsis ◽  
Anne Ngu ◽  
Coralys Rivera

This paper presents SmartFall, an Android app that uses accelerometer data collected from a commodity smartwatch Internet of Things (IoT) device to detect falls. The smartwatch is paired with a smartphone that runs the SmartFall application, which performs the computation necessary for predicting falls in real time without incurring the latency of communicating with a cloud server, while also preserving data privacy. We experimented with both traditional (Support Vector Machine and Naive Bayes) and non-traditional (Deep Learning) machine learning algorithms for the creation of fall detection models using three different fall datasets (Smartwatch, Notch, Farseeing). Our results show that a Deep Learning model for fall detection generally outperforms the more traditional models across the three datasets. This is attributed to the Deep Learning model’s ability to automatically learn subtle features from the raw accelerometer data that are not available to Naive Bayes and Support Vector Machine, which are restricted to a small set of manually specified, hand-extracted features. Furthermore, the Deep Learning model exhibits a better ability to generalize to new users when predicting falls, an important quality for any model intended to succeed in the real world. We also present the three-layer open IoT system architecture used in SmartFall, which can be easily adapted for the collection and analysis of other sensor data modalities (e.g., heart rate, skin temperature, walking patterns) and which enables remote monitoring of a subject’s wellbeing.
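The contrast drawn above between hand-engineered features and learned representations can be illustrated with a minimal 1D convolutional network over raw accelerometer windows; this is an assumed toy architecture for illustration, not the SmartFall model.

```python
# Minimal sketch (assumptions, not the SmartFall implementation): a small 1D CNN
# over fixed-length tri-axial accelerometer windows, the kind of model the paper
# contrasts with hand-engineered SVM / Naive Bayes features.
import torch
import torch.nn as nn

class FallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)   # fall vs. non-fall

    def forward(self, x):                    # x: (batch, 3, window_len)
        return self.classifier(self.features(x).squeeze(-1))

model = FallNet()
dummy = torch.randn(8, 3, 128)               # 8 windows of raw accelerometer data
print(model(dummy).shape)                     # torch.Size([8, 2])
```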


Information ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 416 ◽  
Author(s):  
Lei Chen ◽  
Shurui Fan ◽  
Vikram Kumar ◽  
Yating Jia

Human activity recognition (HAR) has been increasingly used in medical care, behavior analysis, and the entertainment industry to improve the user experience. Most existing works use fixed models to identify various activities, but these do not adapt well to the dynamic nature of human activities. We investigated activity recognition with postural transition awareness. The inertial sensor data were filtered, and a set of 585 features was extracted from both the time and frequency domains of the signals. Three feature selection algorithms were then evaluated to obtain the optimal feature subset for posture classification, and three classifiers (support vector machine, decision tree, and random forest) were adopted for comparative analysis. In our experiments, the support vector machine gave better classification results than the other two methods, achieving up to 98% accuracy in multi-class classification. Finally, the results were verified by probability estimation.
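A minimal sketch of the comparison described above, assuming a scikit-learn-style workflow: univariate feature selection (a stand-in for the selection algorithms the authors compare) followed by SVM, decision tree, and random forest classifiers on synthetic placeholder data.

```python
# Hedged sketch, not the authors' exact pipeline: select a feature subset from
# the 585 time/frequency-domain features, then compare three classifiers.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 585))    # placeholder: 585 extracted features
y = rng.integers(0, 6, size=600)   # placeholder: activity / posture labels

for name, clf in [("SVM", SVC()),
                  ("Decision tree", DecisionTreeClassifier()),
                  ("Random forest", RandomForestClassifier(n_estimators=100))]:
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=60),   # k is an assumed value
                         clf)
    acc = cross_val_score(pipe, X, y, cv=3).mean()
    print(f"{name}: {acc:.3f}")
```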


2014 ◽  
Vol 687-691 ◽  
pp. 1003-1006
Author(s):  
Xian Wei Wang ◽  
Fu Cheng Cao

In this study, the ability to discriminate between falls and activities of daily living (ADL) was investigated using simulated falls and ADL performed by elderly subjects wearing tri-axial accelerometer sensors mounted on the chest. The human body movement data were analyzed using a one-class support vector machine (SVM) to characterize the different motion types. Fall detection experiments were performed in four directions: forward, backward, left, and right. The preliminary results show that this method detects falls effectively, reducing both false positives and false negatives while improving fall detection accuracy, and the application can offer a new safeguard for elderly health.
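As a hedged illustration of the one-class SVM idea (not the study's implementation), the sketch below trains on ADL-only feature vectors and flags outlying windows as candidate falls; the feature names in the comments are assumptions.

```python
# Illustrative sketch only: a one-class SVM trained on ADL windows from a
# chest-mounted tri-axial accelerometer, flagging outliers as candidate falls.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
adl_features = rng.normal(size=(300, 4))            # e.g. signal magnitude area, peak |a|, tilt, energy (assumed)
test_features = rng.normal(loc=3.0, size=(20, 4))   # placeholder "fall-like" windows

scaler = StandardScaler().fit(adl_features)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(adl_features))

pred = ocsvm.predict(scaler.transform(test_features))   # -1 = outlier (possible fall)
print("Detected falls:", int((pred == -1).sum()), "of", len(pred))
```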


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Vikas Tripathi ◽  
Durgaprasad Gangodkar ◽  
Vivek Latta ◽  
Ankush Mittal

Automated teller machines (ATMs) are widely used to carry out banking transactions and have become one of the necessities of everyday life. ATMs facilitate withdrawal, deposit, and transfer of money from one account to another round the clock. However, this convenience is marred by criminal activities such as money snatching and attacks on customers, which increasingly affect the security of bank customers. In this paper, we propose a video-based framework that efficiently identifies abnormal activities at ATM installations and generates an alarm during any untoward incident. The proposed approach uses motion history images (MHI) and Hu moments to extract relevant features from video. Principal component analysis is used to reduce the dimensionality of the features, and classification is carried out with a support vector machine. The analysis is performed on different video sequences while varying the window size of the MHI. The proposed framework is able to distinguish normal from abnormal activities such as money snatching, harm to the customer in a fight, or an attack on the customer, with an average accuracy of 95.73%.
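The feature pipeline described above can be sketched as follows, under the assumption of a simple frame-differencing MHI; this is illustrative code on placeholder clips, not the authors' implementation.

```python
# Rough sketch under stated assumptions: build a motion history image (MHI) from
# frame differences, summarize it with Hu moments, reduce with PCA, classify with SVM.
import numpy as np
import cv2
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def motion_history(frames, tau=15, thresh=30):
    """frames: list of grayscale uint8 arrays; returns a decaying MHI."""
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = cv2.absdiff(curr, prev) > thresh
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0)).astype(np.float32)
    return mhi

def hu_features(mhi):
    return cv2.HuMoments(cv2.moments(mhi)).ravel()   # 7 Hu moments

# Placeholder data: each "clip" is a short sequence of random frames.
rng = np.random.default_rng(3)
clips = [rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, size=40)   # 0 = normal, 1 = abnormal (placeholder)

X = np.array([hu_features(motion_history(list(c))) for c in clips])
X = PCA(n_components=5).fit_transform(X)
clf = SVC().fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```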


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations of remotely generated health data, such as low subject numbers, sparsity, and heterogeneity, a transfer learning (TL) approach drawing on similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. Fine-tuning TL DCNN HAR models towards MS disease recognition tasks was shown to outperform previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. The lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions through relevance heatmaps computed with Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that distinguish healthy participants from people with MS (PwMS) could begin to be established and understood. The interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
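Conceptually, the transfer-learning step can be sketched as freezing a HAR-pretrained feature extractor and fine-tuning a new classification head; the architecture below is an assumption for illustration only, not the authors' DCNN.

```python
# Conceptual sketch of the transfer-learning idea (assumed architecture):
# pretrain a 1D CNN on a HAR task, freeze the convolutional feature extractor,
# and fine-tune a new head for MS vs. healthy recognition.
import torch
import torch.nn as nn

def make_backbone():
    return nn.Sequential(
        nn.Conv1d(6, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    )

# 1) Backbone assumed to be pretrained on a HAR task (e.g. 6 activity classes).
backbone = make_backbone()
har_head = nn.Linear(64, 6)      # HAR pretraining head, unused after transfer

# 2) Transfer: freeze the backbone, attach and train a new disease-recognition head.
for p in backbone.parameters():
    p.requires_grad = False
ms_head = nn.Linear(64, 2)       # healthy vs. PwMS
model = nn.Sequential(backbone, ms_head)

optimizer = torch.optim.Adam(ms_head.parameters(), lr=1e-3)
x = torch.randn(4, 6, 256)       # 4 windows of 6-axis inertial data
print(model(x).shape)            # torch.Size([4, 2])
```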


2019 ◽  
Vol 6 (5) ◽  
pp. 190001 ◽  
Author(s):  
Katherine E. Klug ◽  
Christian M. Jennings ◽  
Nicholas Lytal ◽  
Lingling An ◽  
Jeong-Yeol Yoon

A straightforward method for classifying heavy metal ions in water is proposed using statistical classification and clustering techniques applied to non-specific microparticle scattering data. A set of carboxylated polystyrene microparticles of sizes 0.91, 0.75 and 0.40 µm was mixed with solutions of nine heavy metal ions and two control cations, and scattering measurements were collected at two angles optimized for scattering from non-aggregated and aggregated particles. Classification of these observations was conducted and compared among several machine learning techniques, including linear discriminant analysis, support vector machine analysis, K-means clustering and K-medians clustering. The study found the highest classification accuracy with linear discriminant and support vector machine analysis, each reporting high classification rates for heavy metal ions with respect to the model. This may be attributed to moderate correlation between detection angle and particle size. These classification models provide reasonable discrimination between most ion species, with the highest distinction seen for Pb(II), Cd(II), Ni(II) and Co(II), followed by Fe(II) and Fe(III), potentially due to their known sorption with carboxyl groups. The support vector machine analysis was also applied to three different mixture solutions representing leaching from pipes and mine tailings, and showed good correlation with the single-species data, specifically for Pb(II) and Ni(II). With more expansive training data and further processing, this method shows promise for low-cost, portable heavy metal identification and sensing.
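A minimal sketch of the classification comparison, assuming six scattering features per observation (two angles × three particle sizes) and synthetic placeholder data rather than the study's measurements.

```python
# Illustrative sketch only: comparing linear discriminant analysis and an SVM on
# per-sample scattering intensities; data and feature layout are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(330, 6))        # placeholder scattering measurements
y = rng.integers(0, 11, size=330)    # placeholder: 9 heavy-metal ions + 2 controls

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```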


2011 ◽  
Vol 80-81 ◽  
pp. 490-494 ◽  
Author(s):  
Han Bing Liu ◽  
Yu Bo Jiao ◽  
Ya Feng Gong ◽  
Hai Peng Bi ◽  
Yan Yi Sun

A damage identification method based on a support vector machine (SVM) optimized by particle swarm optimization (PSO) is proposed in this paper. The classification accuracy of damage localization and the detection accuracy of damage severity are used as the fitness functions, respectively. The optimal SVM parameters can be obtained through the velocity and position updating of PSO. A simply supported beam bridge with five girders is provided as a numerical example, and damage cases with single and multiple suspicious damage elements are established to verify the feasibility of the proposed method. Numerical results indicate that the PSO-optimized SVM can effectively identify the damage locations and severities.
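The PSO-SVM coupling can be illustrated with a small particle swarm searching over the SVM penalty and kernel-width parameters (assumed here to be C and gamma), using cross-validated accuracy as the fitness; the data are synthetic placeholders, not the girder model's features.

```python
# Minimal sketch of PSO-tuned SVM hyperparameters; assumptions throughout.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))      # placeholder damage-sensitive features
y = rng.integers(0, 5, size=200)    # placeholder damage-location classes

def fitness(params):                # cross-validated accuracy as fitness
    C, gamma = np.exp(params)       # search in log-space to keep values positive
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Basic particle swarm over (log C, log gamma).
n_particles, n_iters, w, c1, c2 = 10, 20, 0.7, 1.5, 1.5
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # velocity update
    pos = pos + vel                                                    # position update
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("Best (C, gamma):", np.exp(gbest), "fitness:", pbest_f.max().round(3))
```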


2017 ◽  
Vol 13 (5) ◽  
pp. 155014771770741 ◽  
Author(s):  
Kaibo Fan ◽  
Ping Wang ◽  
Yan Hu ◽  
Bingjie Dou

10.2196/13961 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13961
Author(s):  
Kim Sarah Sczuka ◽  
Lars Schwickert ◽  
Clemens Becker ◽  
Jochen Klenk

Background: Falls are a common health problem, which in the worst cases can lead to death. To develop reliable fall detection algorithms as well as suitable prevention interventions, it is important to understand the circumstances and characteristics of real-world fall events. Although falls are common, they are seldom observed, and reports are often biased. Wearable inertial sensors provide an objective approach to capturing real-world fall signals. However, it is difficult to derive a visualization and interpretation of body movements directly from the fall signals, and corresponding video data are rarely available. Objective: The re-enactment method uses the available information from inertial sensors to simulate fall events, replicate the data, validate the simulation, and thereby enable a more precise description of the fall event. The aim of this paper is to describe this method and demonstrate the validity of the re-enactment approach. Methods: Real-world fall data, measured by inertial sensors attached to the lower back, were selected from the Fall Repository for the Design of Smart and Self-Adaptive Environments Prolonging Independent Living (FARSEEING) database. We focused on well-described fall events, such as stumbling, to be re-enacted under safe conditions in a laboratory setting. For the purposes of exemplification, we selected the acceleration signal of one fall event to establish a detailed simulation protocol based on identified postures and trunk movement sequences. The subsequent re-enactment experiments were recorded with comparable inertial sensor configurations as well as synchronized video cameras to analyze the movement behavior in detail. The re-enacted sensor signals were then compared with the real-world signals to adapt the protocol and repeat the re-enactment method if necessary. The similarity between the simulated and the real-world fall signals was analyzed with a dynamic time warping algorithm, which enables the comparison of two temporal sequences varying in speed and timing. Results: A fall example from the FARSEEING database was used to show the feasibility of producing a similar sensor signal with the re-enactment method. Although fall events are heterogeneous in chronological sequence and curve progression, it was possible to reproduce a good approximation of the motion of a person’s center of mass during the fall event based on the available sensor information. Conclusions: Re-enactment is a promising method for understanding and visualizing the biomechanics of inertial sensor-recorded real-world falls when performed in a suitable setup, especially if video data are not available.
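The similarity-analysis step can be illustrated with a plain dynamic time warping implementation; this is a generic sketch on synthetic signals, not the study's processing code.

```python
# Sketch of the DTW comparison step only (assumed simple implementation):
# dynamic time warping distance between a real-world acceleration signal
# and a re-enacted one that differs in speed and timing.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW on 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

t = np.linspace(0, 1, 100)
real = np.sin(2 * np.pi * 3 * t)                 # placeholder "real-world" signal
reenacted = np.sin(2 * np.pi * 3 * (t ** 1.1))   # same shape, shifted in timing
print("DTW distance:", round(dtw_distance(real, reenacted), 3))
```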

