Can Ensemble Deep Learning Identify People by Their Gait Using Data Collected from Multi-Modal Sensors in Their Insole?

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4001 ◽  
Author(s):  
Jucheol Moon ◽  
Nelson Hebert Minaya ◽  
Nhat Anh Le ◽  
Hee-Chan Park ◽  
Sang-Il Choi

Gait is a characteristic that has been used to identify individuals. As human gait information can now be captured by several types of devices, many studies have proposed biometric identification methods based on gait information. As research has continued, identification accuracy has been improved by gathering information from multi-modal sensors. However, in past studies, gait information was collected using ancillary devices, and the identification accuracy was not high enough for biometric identification. In this study, we propose a deep learning-based biometric model that identifies people by their gait information collected through a wearable device, namely an insole. The identification accuracy of the proposed model when utilizing multi-modal sensing is over 99%.
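As a rough illustration of the kind of multi-modal fusion described above, the sketch below builds one small 1D-CNN branch per insole modality and concatenates the branch features for identification. The authors' actual architecture is not reproduced here; channel counts, sequence length, and the number of enrolled subjects are assumptions.

```python
# Illustrative sketch only (not the authors' published model): one 1D-CNN branch
# per insole modality (e.g., pressure, acceleration, gyroscope), fused by
# concatenation for subject identification. Shapes are hypothetical.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )

    def forward(self, x):                     # x: (batch, channels, time)
        return self.net(x).squeeze(-1)        # (batch, feat_dim)

class InsoleGaitID(nn.Module):
    def __init__(self, modality_channels=(8, 3, 3), num_subjects=100):
        super().__init__()
        self.branches = nn.ModuleList([ModalityBranch(c) for c in modality_channels])
        self.classifier = nn.Linear(64 * len(modality_channels), num_subjects)

    def forward(self, modalities):            # list of (batch, channels, time) tensors
        feats = [b(x) for b, x in zip(self.branches, modalities)]
        return self.classifier(torch.cat(feats, dim=1))

# Example forward pass on random data (pressure: 8 ch, accel: 3 ch, gyro: 3 ch).
model = InsoleGaitID()
batch = [torch.randn(4, c, 200) for c in (8, 3, 3)]
logits = model(batch)                          # (4, 100) scores over enrolled subjects
```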

Author(s):  
Tao Zhen ◽  
Lei Yan ◽  
Jian-lei Kong

Human gait-phase recognition is an important technology in the fields of exoskeleton robot control and medical rehabilitation. Inertial sensors with accelerometers and gyroscopes are easy to wear, inexpensive, and have great potential for analyzing gait dynamics. However, current deep learning methods extract spatial and temporal features in isolation, ignoring their inherent correlation in high-dimensional spaces, which limits the accuracy of a single model. This paper proposes an effective hybrid deep learning framework based on the fusion of multiple spatiotemporal networks (FMS-Net), which is used to detect asynchronous gait phases from IMU signals. More specifically, it first uses a gait-information acquisition system to collect data from IMU sensors fixed on the lower leg. After data preprocessing, the framework constructs a CNN-based spatial feature extractor and an LSTM-based temporal feature extractor. Finally, a skip-connection structure and a two-layer fully connected fusion module are used to achieve the final gait recognition. Experimental results show that this method has better identification accuracy than other comparative methods, with the macro-F1 reaching 96.7%.
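The following is a hedged sketch of the CNN-plus-LSTM fusion idea, not FMS-Net itself: a CNN spatial extractor, an LSTM temporal extractor, and a concatenation-based fusion head. The 6-channel IMU input, layer sizes, and number of gait phases are assumptions.

```python
# Sketch of a CNN + LSTM fusion classifier for gait phases (illustrative only).
import torch
import torch.nn as nn

class CNNLSTMFusion(nn.Module):
    def __init__(self, in_channels=6, num_phases=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial feature extractor
            nn.Conv1d(in_channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)  # temporal feature extractor
        self.fuse = nn.Sequential(                     # two-layer fully connected fusion
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, num_phases),
        )

    def forward(self, x):                  # x: (batch, channels, time)
        spatial = self.cnn(x)              # (batch, 64, time)
        temporal, _ = self.lstm(spatial.transpose(1, 2))   # (batch, time, 64)
        # skip-connection-style fusion: pooled spatial features + last temporal state
        fused = torch.cat([spatial.mean(dim=2), temporal[:, -1]], dim=1)
        return self.fuse(fused)            # per-window gait-phase logits

logits = CNNLSTMFusion()(torch.randn(8, 6, 128))        # (8, 4)
```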


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms previous works or at least matches them. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy exceeds 95% in some cases. We also analyze the effect of these parameters on performance.
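A minimal sketch of the described pipeline, assuming a univariate input signal and six activity classes: the LSTM encodes the 1D signal into a sequence of hidden states, the sequence is treated as a 1-channel 2D "fingerprint", and a small CNN classifies it.

```python
# Illustrative LSTM-to-2D-fingerprint encoder followed by a CNN classifier.
import torch
import torch.nn as nn

class LSTM2DCNN(nn.Module):
    def __init__(self, hidden=32, num_classes=6):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(num_classes)

    def forward(self, x):                        # x: (batch, time) 1D signal
        seq, _ = self.encoder(x.unsqueeze(-1))   # (batch, time, hidden) encoded sequence
        image = seq.unsqueeze(1)                 # treat the encodings as a 1-channel 2D map
        return self.head(self.cnn(image))

logits = LSTM2DCNN()(torch.randn(4, 64))         # (4, 6)
```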


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful for the healthcare and biomedical sectors in predicting disease. For trivial symptoms, it is difficult to meet a doctor at the hospital at any time. Thus, big data provides essential information regarding diseases on the basis of a patient's symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson’s disease, and Alzheimer’s disease" are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized so that the attributes' ranges are brought to a comparable level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to accentuate large-scale deviations. Here, the weight function is optimized using the combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are subjected to hybrid deep learning algorithms, namely the "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and RNN are optimized using the same hybrid optimization algorithm. Further, a comparative evaluation of the proposed prediction model against existing models confirms its effectiveness through various performance measures.
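A minimal sketch of the normalization and weighted feature extraction step described above; the weight vector here is random, standing in for the JA-MVO-optimized weights, and the data are synthetic.

```python
# Illustrative "normalize, then apply one optimized weight per attribute" step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                        # 100 patients, 8 clinical attributes

# (a) min-max normalization so every attribute lies in [0, 1]
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# (b) weighted feature extraction: one multiplicative weight per attribute
weights = rng.uniform(0.5, 2.0, size=X.shape[1])     # placeholder for JA-MVO-optimized weights
X_weighted = X_norm * weights                        # features handed to the DBN/RNN stage
```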


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3429 ◽  
Author(s):  
Chu ◽  
Yuan ◽  
Hu ◽  
Pan ◽  
Pan

With the increasing size and flexibility of modern grid-connected wind turbines, advanced control algorithms are urgently needed, especially for multi-degree-of-freedom control of blade pitches and sizable rotors. However, the complex dynamics of wind turbines are difficult to model in a simplified state-space form for advanced control design that considers stability. In this paper, grey-box parameter identification of critical mechanical models is systematically studied without excitation experiments, and the applicability of different methods is compared from the viewpoint of control design. Firstly, through mechanism analysis, the Hammerstein structure is adopted for mechanical-side modeling of wind turbines. Under closed-loop control across the whole wind speed range, the structural identifiability of the drive-train model is analyzed qualitatively. Then, mutual information among the identified variables is calculated to quantitatively reveal the relationship between identification accuracy and variable relevance. Next, methods such as subspace identification, recursive least squares identification, and optimal identification are compared for a two-mass model and a tower model. Finally, through a high-fidelity simulation demo of a 2 MW wind turbine in the GH Bladed software, multivariable datasets are produced for the study. The results show that the Hammerstein structure is effective for simplifying the modeling process, and closed-loop identification of a two-mass model without excitation experiments is feasible. Meanwhile, it is found that variable relevance has an obvious influence on identification accuracy, with mutual information serving as a good indicator: higher mutual information often yields better accuracy. Additionally, the three identification methods have diverse performance levels, showing their application potential for different control design algorithms. In contrast, grey-box optimal parameter identification is the most promising for advanced control design considering stability, although its simplified representation of complex mechanical dynamics needs additional dynamic compensation, which will be studied in future work.
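To make the recursive least squares (RLS) idea concrete, the sketch below identifies the parameters of a generic first-order discrete model from simulated input/output data; the true drive-train and two-mass structures and the GH Bladed datasets are not reproduced, so this only illustrates the identification mechanics.

```python
# Compact RLS parameter identification for y[k+1] = a*y[k] + b*u[k] + noise.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.95, 0.08
u = rng.normal(size=500)                       # stand-in for a torque/pitch input signal
y = np.zeros(501)
for k in range(500):                           # simulate the "plant"
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.normal()

theta = np.zeros(2)                            # [a, b] estimates
P = np.eye(2) * 1e3                            # covariance
lam = 0.99                                     # forgetting factor
for k in range(500):
    phi = np.array([y[k], u[k]])               # regressor
    K = P @ phi / (lam + phi @ P @ phi)        # gain
    theta = theta + K * (y[k + 1] - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam

print(theta)                                   # approaches [0.95, 0.08]
```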


Technologies ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 14
Author(s):  
James Dzisi Gadze ◽  
Akua Acheampomaa Bamfo-Asante ◽  
Justice Owusu Agyemang ◽  
Henry Nunoo-Mensah ◽  
Kwasi Adu-Boahen Opare

Software-Defined Networking (SDN) is a new paradigm that revolutionizes the idea of a software-driven network through the separation of the control and data planes. It addresses the problems of traditional network architecture. Nevertheless, this brilliant architecture is exposed to several security threats, e.g., the distributed denial of service (DDoS) attack, which is hard to contain in such software-based networks. The concept of a centralized controller in SDN makes it a single point of attack as well as a single point of failure. In this paper, deep learning-based models, long short-term memory (LSTM) and convolutional neural network (CNN), are investigated, illustrating their feasibility and efficiency in detecting and mitigating DDoS attacks. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated based on accuracy, recall, and true negative rate. We compared the performance of the deep learning models with classical machine learning models, and we further provide details on the time taken to detect and mitigate the attack. Our results show that RNN LSTM is a viable deep learning algorithm that can be applied to the detection and mitigation of DDoS attacks on the SDN controller. Our proposed model produced an accuracy of 89.63%, which outperformed linear-based models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN, which is a linear-based model, outperformed our proposed model (achieving an accuracy of 99.4%), our proposed model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, we found that the split ratio of the training and testing datasets can yield different performance for a deep learning algorithm in a specific task. The model achieved its best performance with a 70/30 split, compared to 80/20 and 60/40 split ratios.
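A hedged sketch of an LSTM flow classifier of the type evaluated above; the number of flow features, the window length, and the binary benign/DDoS output are assumptions, not the authors' exact configuration.

```python
# Illustrative LSTM classifier over windows of per-flow features (benign vs. DDoS).
import torch
import torch.nn as nn

class DDoSLSTM(nn.Module):
    def __init__(self, num_features=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # benign vs. DDoS

    def forward(self, x):                     # x: (batch, time_steps, num_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # classify from the last time step

scores = DDoSLSTM()(torch.randn(16, 20, 10))  # (16, 2) class logits per flow window
```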


Agriculture ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 651
Author(s):  
Shengyi Zhao ◽  
Yun Peng ◽  
Jizhan Liu ◽  
Shuo Wu

Crop disease diagnosis is of great significance to crop yield and agricultural production. Deep learning methods have become the main research direction for diagnosing crop diseases. This paper proposes a deep convolutional neural network that integrates an attention mechanism, which can better adapt to the diagnosis of a variety of tomato leaf diseases. The network structure mainly includes residual blocks and attention extraction modules. The model can accurately extract the complex features of various diseases. Extensive comparative experiments show that the proposed model achieves an average identification accuracy of 96.81% on the tomato leaf disease dataset. Compared with other models, it has significant advantages in terms of network complexity and real-time performance. Moreover, in a model comparison experiment on the public grape leaf disease dataset, the proposed model also achieves better results, with an average identification accuracy of 99.24%. This confirms that adding the attention module enables more accurate extraction of the complex features of a variety of diseases with fewer parameters. The proposed model provides a high-performance solution for crop diagnosis in a real agricultural environment.
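In the spirit of the "residual blocks plus attention extraction modules" described above, the sketch below combines a residual block with a squeeze-and-excitation-style channel attention module; it is an illustration, not the authors' exact architecture.

```python
# Residual block with channel attention (squeeze-and-excitation style), illustrative only.
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.attn = nn.Sequential(            # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        y = y * self.attn(y)                  # re-weight channels before the skip connection
        return torch.relu(x + y)

out = AttentionResidualBlock()(torch.randn(1, 64, 56, 56))   # same shape as the input
```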


2021 ◽  
Vol 11 (9) ◽  
pp. 3974
Author(s):  
Laila Bashmal ◽  
Yakoub Bazi ◽  
Mohamad Mahmoud Al Rahhal ◽  
Haikel Alhichri ◽  
Naif Al Ajlan

In this paper, we present an approach for the multi-label classification of remote sensing images based on data-efficient transformers. During the training phase, we generated a second view for each image from the training set using data augmentation. Then, both the image and its augmented version were reshaped into a sequence of flattened patches and fed to the transformer encoder. The latter extracts a compact feature representation from each image with the help of a self-attention mechanism, which can handle the global dependencies between different regions of the high-resolution aerial image. On top of the encoder, we mounted two classifiers, a token classifier and a distiller classifier. During training, we minimized a global loss consisting of two terms, each corresponding to one of the two classifiers. In the test phase, we considered the average of the two classifiers as the final class labels. Experiments on two datasets acquired over the cities of Trento and Civezzano with a ground resolution of two centimeters demonstrated the effectiveness of the proposed model.
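The test-time fusion step can be sketched as follows, assuming sigmoid multi-label outputs: the predictions of the token head and the distiller head are averaged and thresholded. The encoder outputs below are random stand-ins, not the data-efficient transformer itself.

```python
# Illustrative test-time averaging of two classifier heads for multi-label prediction.
import torch
import torch.nn as nn

num_labels, embed_dim = 17, 192
token_head = nn.Linear(embed_dim, num_labels)
distill_head = nn.Linear(embed_dim, num_labels)

cls_embedding = torch.randn(4, embed_dim)      # stand-in for the class-token output
dist_embedding = torch.randn(4, embed_dim)     # stand-in for the distillation-token output

probs = (torch.sigmoid(token_head(cls_embedding)) +
         torch.sigmoid(distill_head(dist_embedding))) / 2
predicted_labels = probs > 0.5                 # multi-label decision per image
```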


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 702
Author(s):  
Nalee Kim ◽  
Jaehee Chun ◽  
Jee Suk Chang ◽  
Chang Geol Lee ◽  
Ki Chang Keum ◽  
...  

This study investigated the feasibility of deep learning-based segmentation (DLS) and continual training for adaptive radiotherapy (RT) of head and neck (H&N) cancer. One hundred patients treated with definitive RT were included. Based on 23 organs-at-risk (OARs) manually segmented in the initial planning computed tomography (CT), a modified FC-DenseNet was trained for DLS: (i) using data obtained from 60 patients, with 20 matched patients in the test set (DLSm); (ii) using data obtained from the same 60 patients, with 20 unmatched patients in the test set (DLSu). Manually contoured OARs in adaptive planning CT for an independent set of 20 patients were provided as test sets. Deformable image registration (DIR) was also performed. All 23 OARs were compared using quantitative measurements, and nine OARs were also evaluated via subjective assessment from 26 observers using the Turing test. DLSm achieved better performance than both DLSu and DIR (mean Dice similarity coefficient: 0.83 vs. 0.80 vs. 0.70), mainly for glandular structures, whose volumes significantly reduced during RT. Based on subjective measurements, DLS was often perceived as human (49.2%). Furthermore, DLSm was preferred over DLSu (67.2%) and DIR (96.7%), with a rate of required revision similar to that of manual segmentation (28.0% vs. 29.7%). In conclusion, DLS was effective and preferred over DIR. Additionally, continual DLS training is required for effective optimization and robustness in personalized adaptive RT.
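For reference, a small sketch of the Dice similarity coefficient used above to compare auto-segmented and manually contoured OARs, assuming binary masks.

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

# toy example: two overlapping square masks
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[15:45, 15:45] = True
print(round(dice(a, b), 3))                    # ~0.694 for this toy overlap
```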


Author(s):  
Falk Schwendicke ◽  
Akhilanand Chaurasia ◽  
Lubaina Arsiwala ◽  
Jae-Hong Lee ◽  
Karim Elhennawy ◽  
...  

Abstract Objectives Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy and underlying evidence for DL for cephalometric landmark detection on 2-D and 3-D radiographs. Methods Diagnostic accuracy studies published in 2015–2020 in Medline/Embase/IEEE/arXiv and employing DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup analysis, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498). Data From 321 identified records, 19 studies (published 2017–2020), all employing convolutional neural networks, mainly on 2-D lateral radiographs (n=15), using data from publicly available datasets (n=12) and testing the detection of a mean of 30 (SD: 25; range: 7–93) landmarks, were included. The reference test was established by two experts (n=11), one expert (n=4), three experts (n=3), and a set of annotators (n=1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding data selection and reference test conduct. Landmark prediction error centered around the 2-mm error threshold (mean: –0.581 mm; 95% CI: –1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (0.770 to 0.824). Conclusions DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from a high risk of bias. Demonstrating the robustness and generalizability of DL for landmark detection is needed. Clinical significance Existing DL models show consistent and largely high accuracy for automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse but promising. Future studies should focus on demonstrating the generalizability, robustness, and clinical usefulness of DL for this objective.
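As an illustration of the "proportion of landmarks detected within the 2-mm threshold" summary metric, the sketch below computes that rate on synthetic landmark errors; the real study-level data are not reproduced here.

```python
# Detection rate within a 2-mm error threshold, on synthetic prediction errors.
import numpy as np

rng = np.random.default_rng(42)
errors_mm = np.abs(rng.normal(loc=1.4, scale=1.0, size=1000))   # synthetic landmark errors
within_2mm = (errors_mm <= 2.0).mean()
print(f"detection rate within 2 mm: {within_2mm:.3f}")
```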


2019 ◽  
Vol 5 (1) ◽  
pp. 9-12
Author(s):  
Jyothsna Kondragunta ◽  
Christian Wiede ◽  
Gangolf Hirtz

Better handling of neurological or neurodegenerative disorders such as Parkinson’s disease (PD) is only possible with early identification of relevant symptoms. Although the disease itself cannot be cured, its effects can be delayed with proper care and treatment. For this reason, early identification of PD symptoms plays a key role. Recent studies state that gait abnormalities are clearly evident when people with PD perform dual cognitive tasks. Research has also shown that early identification of abnormal gait leads to identification of PD in advance. Novel technologies provide many options for the identification and analysis of human gait. These technologies can be broadly classified as wearable and non-wearable. As PD is more prominent in elderly people, wearable sensors may hinder a person’s natural movement and are considered out of scope for this paper. Non-wearable technologies, especially image processing (IP) approaches, capture data about a person’s gait through optical sensors. Existing IP approaches to gait analysis are restricted by parameters such as the angle of view, the background, and occlusions due to objects or the person’s own body movements. To date, no research has analyzed gait through 3D pose estimation. As deep learning has proven efficient in 2D pose estimation, we propose 3D pose estimation along with a suitable dataset. This paper outlines the advantages and disadvantages of state-of-the-art methods for gait analysis aimed at early PD identification. Furthermore, the importance of extracting gait parameters from 3D pose estimation using deep learning is outlined.
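As a rough illustration of deriving a gait parameter from 3D pose-estimation output, the sketch below estimates step length from the separation of two assumed ankle joints; the joint indices and the random keypoints are placeholders, since the paper only outlines the approach.

```python
# Deriving a simple gait parameter (step length) from per-frame 3D keypoints.
import numpy as np

rng = np.random.default_rng(3)
# (frames, joints, xyz) output of a hypothetical 3D pose estimator
pose = rng.normal(size=(120, 17, 3))
LEFT_ANKLE, RIGHT_ANKLE = 15, 16                  # assumed joint indices

ankle_gap = np.linalg.norm(pose[:, LEFT_ANKLE] - pose[:, RIGHT_ANKLE], axis=1)
step_length = ankle_gap.max()                     # ankles are farthest apart at mid-step
print(f"estimated step length: {step_length:.2f} (model units)")
```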

