Leveraging Smartphone Sensors for Detecting Abnormal Gait for Smart Wearable Mobile Technologies

2021 ◽  
Vol 15 (24) ◽  
pp. 167-175
Author(s):  
Md Shahriar Tasjid ◽  
Ahmed Al Marouf

Walking is one of the most common modes of terrestrial locomotion for humans and is essential for most kinds of daily activities. When a person walks, the resulting movement pattern is known as gait. Gait analysis is used in sports and healthcare. Gait can be analyzed in different ways, for example from video captured by surveillance cameras or depth cameras in a lab environment. It can also be recognized with wearable sensors, e.g., accelerometers, force sensors, gyroscopes, flexible goniometers, magnetoresistive sensors, electromagnetic tracking systems, and electromyography (EMG). Analysis with these sensors requires lab conditions or obliges users to wear the sensors, and detecting abnormality in human gait requires incorporating the sensors separately. Detecting abnormal gait can reveal information about a person's health condition, so distinguishing regular from abnormal gait with smart wearable technologies may give insights into the subject's health. Therefore, in this paper, we propose a way to analyze abnormal human gait through smartphone sensors. Since smart devices such as smartphones and smartwatches are used by most people nowadays, their built-in sensors can be used to track gait. In this study, twenty-three (N=23) people recorded their walking activities; fourteen had normal gait, and nine had difficulty walking due to illness. To classify the subjects' gait, we adopted five machine learning algorithms in addition to a deep learning algorithm, and the advantages of the traditional classifiers were analyzed and compared among themselves. After rigorous performance analysis, the support vector machine (SVM) showed 96% accuracy, the highest among the traditional classifiers; logistic regression, Naïve Bayes, and k-Nearest Neighbor (kNN) obtained 70%, 84%, and 95% accuracy, respectively. As per the state of the art, deep learning classifiers have been proven to outperform traditional classifiers in similar binary classification problems. Considering this, we applied a 2D convolutional neural network (2D-CNN) classification algorithm, which outperformed the other algorithms with an accuracy of 98%. The model can be optimized and integrated with other sensors for use in mobile wearable devices.
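A minimal sketch of the traditional-classifier comparison described above, assuming windowed accelerometer features as input; the feature set, sample counts, and synthetic data are placeholders for illustration, not the authors' actual pipeline:

```python
# Sketch: compare SVM, kNN, Naive Bayes, and logistic regression on
# placeholder per-window accelerometer features for normal vs. abnormal gait.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder: per-window statistics (mean, std, ...) of tri-axial accelerometer data.
X = rng.normal(size=(230, 12))          # 230 gait windows, 12 hand-crafted features
y = rng.integers(0, 2, size=230)        # 0 = normal gait, 1 = abnormal gait

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "NaiveBayes": GaussianNB(),
    "LogReg": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```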

2021 ◽  
Vol 10 (4) ◽  
pp. 1-25
Author(s):  
Nimi W. S. ◽  
P. Subha Hency Jose ◽  
Jegan R.

This paper presents a brief review of recent developments in wearable devices and their importance in healthcare networks. The state-of-the-art system architectures of wearable healthcare devices and their design techniques are reviewed as an essential step towards developing smart devices for various biomedical applications, including disease classification and detection, analysis of the nature of biosignals, vital-parameter measurement, and noninvasive e-health monitoring. From the review of recently published research on medical wearable devices and biosignal analysis, it can be concluded that designing and developing smart wearable devices for quality signal acquisition and e-health monitoring in the healthcare environment is essential, as it enables effective multi-parameter extraction. This will help medical practitioners understand a patient's health condition easily by visualizing a quality signal from smart wearable devices.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1962
Author(s):  
Enrico Buratto ◽  
Adriano Simonetto ◽  
Gianluca Agresti ◽  
Henrik Schäfer ◽  
Pietro Zanuttigh

In this work, we propose a novel approach for correcting multi-path interference (MPI) in Time-of-Flight (ToF) cameras by estimating the direct and global components of the incoming light. MPI is an error source linked to the multiple reflections of light inside a scene; each sensor pixel receives information coming from different light paths, which generally leads to an overestimation of the depth. We introduce a novel deep learning approach that estimates the structure of the time-dependent scene impulse response and from it recovers a depth image with a reduced amount of MPI. The model consists of two main blocks: a predictive model that learns a compact encoded representation of the backscattering vector from the noisy input data, and a fixed backscattering model that translates the encoded representation into the high-dimensional light response. Experimental results on real data show the effectiveness of the proposed approach, which reaches state-of-the-art performance.
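A hedged sketch of the two-block idea (a learned encoder producing a compact code, followed by a fixed, non-trainable backscattering model); the layer sizes, measurement dimension, and response length are assumptions for illustration, not the published architecture:

```python
# Sketch: learned encoder -> compact code -> frozen decoder -> dense backscattering vector.
import torch
import torch.nn as nn

class CompactEncoder(nn.Module):
    """Predicts a compact encoded representation from noisy per-pixel ToF data."""
    def __init__(self, n_measurements=10, code_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_measurements, 64), nn.ReLU(),
            nn.Linear(64, code_dim),
        )

    def forward(self, x):
        return self.net(x)

class FixedBackscatteringModel(nn.Module):
    """Maps the compact code to a high-dimensional time response; kept frozen."""
    def __init__(self, code_dim=8, response_len=256):
        super().__init__()
        self.basis = nn.Linear(code_dim, response_len, bias=False)
        for p in self.parameters():
            p.requires_grad = False   # fixed block, not learned

    def forward(self, code):
        return torch.relu(self.basis(code))

encoder = CompactEncoder()
decoder = FixedBackscatteringModel()
tof_measurements = torch.randn(4, 10)          # batch of per-pixel ToF measurements
backscattering = decoder(encoder(tof_measurements))
print(backscattering.shape)                    # torch.Size([4, 256])
```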


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3068
Author(s):  
Soumaya Dghim ◽  
Carlos M. Travieso-González ◽  
Radim Burget

The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of Nosema disease, which is considered to be one of the most economically significant diseases today. This work shows a solution for recognizing and identifying Nosema cells among the other objects present in microscopic images. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images; machine learning methods, such as an artificial neural network (ANN) and a support vector machine (SVM), are then applied to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning. Several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer learning methods (AlexNet, VGG-16, and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish Nosema images from other object images. The best accuracy, 96.25%, was reached by the pre-trained VGG-16 network.
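An illustrative transfer-learning sketch in the spirit of the second strategy, fine-tuning only the classifier head of a pre-trained VGG-16; the binary class setup, the torchvision weights API, and the dummy training step are assumptions, not the authors' exact training procedure:

```python
# Sketch: freeze the VGG-16 feature extractor, replace and fine-tune the final layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; only the classifier head is trained.
for param in model.features.parameters():
    param.requires_grad = False

num_classes = 2  # assumed binary setup: Nosema cell vs. other object
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random sub-images, just to show the wiring.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy loss: {loss.item():.4f}")
```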


Author(s):  
Hanaa Torkey ◽  
Elhossiny Ibrahim ◽  
EZZ El-Din Hemdan ◽  
Ayman El-Sayed ◽  
Marwa A. Shouman

Communication between sensors spread throughout healthcare systems may cause some of the transferred features to be missing. Repairing the data problems of sensing devices with artificial intelligence technologies has facilitated the Medical Internet of Things (MIoT) and its emerging applications in healthcare. MIoT has great potential to affect patients' lives. The volume of data collected from smart wearable devices increases dramatically with data collected from millions of patients suffering from diseases such as diabetes. However, sensor or human errors lead to missing values in the data, and the major challenge is how to predict these values so that the performance of the data analysis model stays within a good range. In this paper, a complete healthcare system for diabetics is used, and two new algorithms are developed to handle the crucial problem of data missed from MIoT wearable sensors. The proposed work is based on the integration of Random Forest, mean, class mean, interquartile range (IQR), and deep learning to produce a clean and complete dataset, which can enhance the performance of any machine learning model. Moreover, an outlier-repair technique is proposed that first detects the record's class in the dataset and then repairs the outlier with deep learning (DL). With the two steps of imputation and outlier repair, the final model reaches 97.41% accuracy and 99.71% Area Under the Curve (AUC). The healthcare system is a web-based diabetes classification application built with Flask, intended for use in hospitals and healthcare centers to diagnose patients effectively.
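A minimal sketch, loosely following the ideas above, of class-aware mean imputation plus IQR-based outlier flagging; the column names, toy values, and diabetes-style layout are assumptions, and the authors' Random Forest and deep-learning repair steps are not shown:

```python
# Sketch: fill missing values with the class mean, then flag outliers by the IQR rule.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "glucose": [148, 85, np.nan, 89, 137, np.nan, 78, 197],
    "bmi":     [33.6, 26.6, 23.3, np.nan, 43.1, 25.6, 31.0, np.nan],
    "outcome": [1, 0, 1, 0, 1, 0, 1, 1],
})

# 1) Class-wise mean imputation: fill a missing value with the mean of its class.
for col in ["glucose", "bmi"]:
    df[col] = df.groupby("outcome")[col].transform(lambda s: s.fillna(s.mean()))

# 2) IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] as outliers,
#    so a downstream model (e.g. a deep network) can repair them.
def iqr_outliers(series: pd.Series) -> pd.Series:
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return (series < q1 - 1.5 * iqr) | (series > q3 + 1.5 * iqr)

print(df)
print(iqr_outliers(df["glucose"]))
```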


2019 ◽  
Vol 5 (1) ◽  
pp. 9-12
Author(s):  
Jyothsna Kondragunta ◽  
Christian Wiede ◽  
Gangolf Hirtz

Better handling of neurological or neurodegenerative disorders such as Parkinson's Disease (PD) is only possible with early identification of relevant symptoms. Although the disease itself cannot be cured, its effects can be delayed with proper care and treatment, so early identification of PD symptoms plays a key role. Recent studies state that gait abnormalities are clearly evident when people with PD perform dual cognitive tasks, and research has also shown that early identification of abnormal gait leads to identification of PD in advance. Novel technologies provide many options for the identification and analysis of human gait and can be broadly classified as wearable and non-wearable technologies. As PD is more prominent in elderly people, wearable sensors may hinder a person's natural movement and are considered out of scope for this paper. Non-wearable technologies, especially image processing (IP) approaches, capture the person's gait through optic sensors. Existing IP approaches for gait analysis are restricted by parameters such as angle of view, background, and occlusions caused by objects or by the subject's own body movements. To date, no research has analyzed gait through 3D pose estimation. As deep learning has proven efficient in 2D pose estimation, we propose 3D pose estimation along with an appropriate dataset. This paper outlines the advantages and disadvantages of the state-of-the-art methods applied to gait analysis for early PD identification. Furthermore, the importance of extracting gait parameters from 3D pose estimation using deep learning is outlined.
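As an illustration of what 3D-pose-based gait analysis could extract, a hedged sketch that derives a simple gait parameter from synthetic 3D ankle keypoints; the keypoint layout and trajectory are assumptions, not data from any cited study:

```python
# Sketch: per-frame inter-ankle distance from synthetic 3D keypoints; its peaks
# approximate step length, one gait parameter obtainable from a 3D pose estimate.
import numpy as np

frames = 100
t = np.linspace(0, 4 * np.pi, frames)
# Synthetic 3D positions (x, y, z in metres) of the two ankles while walking along z.
left_ankle = np.stack([0.4 * np.sin(t), np.zeros(frames), 0.05 * t], axis=1)
right_ankle = np.stack([0.4 * np.sin(t + np.pi), np.zeros(frames), 0.05 * t], axis=1)

# Distance between the ankles projected onto the walking plane (x-z).
diff = left_ankle - right_ankle
inter_ankle = np.linalg.norm(diff[:, [0, 2]], axis=1)

print(f"approximate step length (peak inter-ankle distance): {inter_ankle.max():.2f} m")
```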


Nutrients ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 2005 ◽  
Author(s):  
Frank Lo ◽  
Yingnan Sun ◽  
Jianing Qiu ◽  
Benny Lo

An objective dietary assessment system can help users understand their dietary behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, measurement of the portion size or food volume is required. For volume estimation, previous research has mostly focused on model-based or stereo-based approaches, which rely on manual intervention or require users to capture multiple frames from different viewing angles, a process that can be tedious. In this paper, a view synthesis approach based on deep learning is proposed to reconstruct 3D point clouds of food items and estimate the volume from a single depth image. A distinct neural network is designed to use a depth image from one viewing angle to predict another depth image captured from the corresponding opposite viewing angle. The whole 3D point cloud map is then reconstructed by fusing the initial data points with the synthesized points of the object items through the proposed point cloud completion and Iterative Closest Point (ICP) algorithms. Furthermore, a database of depth images of food items captured from different viewing angles is constructed with image rendering and used to validate the proposed neural network. The methodology is then evaluated by comparing the volume estimated from the synthesized 3D point cloud with the ground-truth volume of the object items.
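A hedged sketch of the fusion step, registering a synthesized (opposite-view) partial cloud onto the measured one with ICP and estimating volume from the convex hull of the merged points; Open3D, SciPy, the correspondence threshold, and the random clouds are assumptions standing in for the paper's proposed point cloud completion and evaluation:

```python
# Sketch: ICP alignment of two partial point clouds, then a convex-hull volume estimate.
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

measured = o3d.geometry.PointCloud()
measured.points = o3d.utility.Vector3dVector(np.random.rand(500, 3))

synthesized = o3d.geometry.PointCloud()
synthesized.points = o3d.utility.Vector3dVector(np.random.rand(500, 3) + 0.01)

# Rigidly register the synthesized (opposite-view) points onto the measured ones.
result = o3d.pipelines.registration.registration_icp(synthesized, measured, 0.05)
synthesized.transform(result.transformation)

# Merge the two clouds and estimate the enclosed volume from their convex hull.
merged = np.vstack([np.asarray(measured.points), np.asarray(synthesized.points)])
print(f"estimated volume: {ConvexHull(merged).volume:.4f} (arbitrary units)")
```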

