Interpretable deep learning for the remote characterisation of ambulation in multiple sclerosis using smartphones

2021
Vol 11 (1)
Author(s): Andrew P. Creagh, Florian Lipsmeier, Michael Lindemann, Maarten De Vos

The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from the raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions through relevance heatmaps attributed by Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that were reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
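As a rough, hedged illustration of the transfer-learning step described above, the Python sketch below freezes a small 1D convolutional backbone assumed to have been pretrained on a HAR task and fine-tunes a new two-class head (healthy vs. PwMS). Layer sizes, the window length, and the commented-out checkpoint name are hypothetical and are not the authors' implementation.

```python
# Hypothetical transfer-learning sketch: a 1D CNN "HAR backbone" is frozen and a
# new classification head is fine-tuned for a two-class (healthy vs. PwMS) task.
import torch
import torch.nn as nn

class HarBackbone(nn.Module):
    """Small 1D CNN over tri-axial inertial windows (3 input channels)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)   # (batch, 64)

backbone = HarBackbone()
# backbone.load_state_dict(torch.load("har_pretrained.pt"))  # hypothetical checkpoint

for p in backbone.parameters():              # freeze the pretrained layers
    p.requires_grad = False
model = nn.Sequential(backbone, nn.Linear(64, 2))   # new MS-vs-healthy head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 256)                   # dummy batch: 8 windows, 3 axes, 256 samples
y = torch.randint(0, 2, (8,))                # dummy labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```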

2021
Vol 11 (12)
pp. 5644
Author(s): Ivana Marin, Saša Mladenović, Sven Gotovac, Goran Zaharija

The global community has recognized an increasing amount of pollutants entering oceans and other water bodies as a severe environmental, economic, and social issue. In addition to prevention, one of the key measures in addressing marine pollution is the cleanup of debris already present in marine environments. Deployment of machine learning (ML) and deep learning (DL) techniques can automate marine waste removal, making the cleanup process more efficient. This study examines the performance of six well-known deep convolutional neural networks (CNNs), namely VGG19, InceptionV3, ResNet50, Inception-ResNetV2, DenseNet121, and MobileNetV2, utilized as feature extractors according to three different extraction schemes for the identification and classification of underwater marine debris. We compare the performance of a neural network (NN) classifier trained on top of deep CNN feature extractors when the feature extractor is (1) fixed; (2) fine-tuned on the given task; (3) fixed during the first phase of training and fine-tuned afterward. In general, fine-tuning resulted in better-performing models but was much more computationally expensive. The best overall NN performance was obtained with the fine-tuned Inception-ResNetV2 feature extractor, with an accuracy of 91.40% and an F1-score of 92.08%, followed by the fine-tuned InceptionV3 extractor. Furthermore, we analyze the performance of conventional ML classifiers when trained on features extracted with deep CNNs. Finally, we show that replacing the NN with a conventional ML classifier, such as a support vector machine (SVM) or logistic regression (LR), can further enhance the classification performance on new data.
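A minimal sketch of extraction scheme (1) under stated assumptions: an ImageNet-pretrained MobileNetV2 is kept fixed as a feature extractor and an SVM is trained on the pooled features. The image and label arrays are placeholders rather than the study's underwater-debris data.

```python
# Scheme (1) sketch: fixed deep-CNN feature extractor + conventional SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

extractor = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
extractor.trainable = False                    # the extractor stays fixed

images = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0  # placeholder batch
labels = np.random.randint(0, 2, size=16)                           # placeholder labels

features = extractor.predict(
    tf.keras.applications.mobilenet_v2.preprocess_input(images))

clf = SVC(kernel="rbf")                        # conventional ML classifier on deep features
clf.fit(features, labels)
print(clf.score(features, labels))
```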


Sensors
2021
Vol 21 (14)
pp. 4669
Author(s): Muhammad Awais, Lorenzo Chiari, Espen A. F. Ihlen, Jorunn L. Helbostad, Luca Palmerini

Physical activity has a strong influence on mental and physical health and is essential for healthy ageing and wellbeing in the ever-growing elderly population. Wearable sensors can provide a reliable and economical measure of activities of daily living (ADLs) by capturing movements through, e.g., accelerometers and gyroscopes. This study explores the potential of using classical machine learning and deep learning approaches to classify the most common ADLs: walking, sitting, standing, and lying. We validate the results on the ADAPT dataset, the most detailed dataset of inertial sensor data to date, synchronised with labelled high frame-rate video data recorded in a free-living environment from older adults living independently. The findings suggest that both approaches can accurately classify ADLs, showing high potential for profiling ADL patterns of the elderly population in free-living conditions. In particular, long short-term memory (LSTM) networks and Support Vector Machines combined with ReliefF feature selection performed equally well, achieving around a 97% F-score in profiling ADLs.
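A hedged sketch of one side of the comparison above: an LSTM network classifying fixed-length windows of inertial data into the four ADLs. The window length, channel count, and layer sizes are illustrative assumptions, not the ADAPT study's configuration.

```python
# LSTM-based ADL classifier over windows of accelerometer + gyroscope signals.
import numpy as np
import tensorflow as tf

n_windows, window_len, n_channels, n_classes = 32, 128, 6, 4  # walking/sitting/standing/lying

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_channels)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randn(n_windows, window_len, n_channels).astype("float32")  # placeholder windows
y = np.random.randint(0, n_classes, size=n_windows)                       # placeholder labels
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```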


2016
Vol 27 (02)
pp. 1650039
Author(s): Francesco Carlo Morabito, Maurizio Campolo, Nadia Mammone, Mario Versaci, Silvana Franceschetti, ...

A novel technique of quantitative EEG for differentiating patients with early-stage Creutzfeldt–Jakob disease (CJD) from other forms of rapidly progressive dementia (RPD) is proposed. The discrimination is based on the extraction of suitable features from the time-frequency representation of the EEG signals through the continuous wavelet transform (CWT). An average measure of complexity of the EEG signal obtained by permutation entropy (PE) is also included. The dimensionality of the feature space is reduced through a multilayer processing system based on the recently emerged deep learning (DL) concept. The DL processor includes a stacked auto-encoder, trained by unsupervised learning techniques, and a classifier whose parameters are determined in a supervised way by associating the known category labels with the reduced vector of high-level features generated by the previous processing blocks. The supervised learning step is carried out using either support vector machines (SVM) or multilayer neural networks (MLP-NN). A subset of EEG recordings from patients suffering from Alzheimer’s Disease (AD) and from healthy controls (HC) is also considered for differentiating CJD patients. When the parameters of the global processing system are fine-tuned by a supervised learning procedure, the proposed system is able to achieve an average accuracy of 89%, an average sensitivity of 92%, and an average specificity of 89% in differentiating CJD from RPD. Similar results are obtained for CJD versus AD and CJD versus HC.
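A simplified sketch of the dimensionality-reduction and classification steps, assuming the CWT/PE feature vectors are already computed: a single-hidden-layer autoencoder stands in for the stacked auto-encoder, trained without labels, and an SVM is then fit on the learned codes. The sizes and placeholder data are assumptions.

```python
# Unsupervised auto-encoder pre-training followed by a supervised SVM on the codes.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

n_samples, n_features, code_dim = 100, 60, 8
X = np.random.randn(n_samples, n_features).astype("float32")  # placeholder CWT + PE features
y = np.random.randint(0, 2, size=n_samples)                    # placeholder CJD vs. RPD labels

inputs = tf.keras.Input(shape=(n_features,))
code = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)
recon = tf.keras.layers.Dense(n_features)(code)

autoencoder = tf.keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, verbose=0)        # unsupervised learning step

encoder = tf.keras.Model(inputs, code)             # shares the trained weights
clf = SVC().fit(encoder.predict(X), y)             # supervised learning step
```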


2022
Vol 14 (2)
pp. 274
Author(s): Mohamed Marzhar Anuar, Alfian Abdul Halin, Thinagaran Perumal, Bahareh Kalantar

In recent years, complex food security issues caused by climatic change, limitations in human labour, and increasing production costs have demanded a strategic approach to problem solving. The emergence of artificial intelligence, enabled by recent advances in computing architectures, could become a new alternative to existing solutions. Deep learning algorithms in computer vision for image classification and object detection can facilitate the agriculture industry, especially in paddy cultivation, by alleviating human effort in laborious, burdensome, and repetitive tasks. Optimal planting density is a crucial factor for paddy cultivation, as it influences the quality and quantity of production. There have been several studies involving planting density using computer vision and remote sensing approaches. While most of the studies have shown promising results, they have disadvantages and leave room for improvement. One of the disadvantages is that the studies aim to detect and count all the paddy seedlings to determine planting density; the locations of defective paddy seedlings are not pointed out to help farmers during the sowing process. In this work we aimed to explore several deep convolutional neural network (DCNN) models to determine which one performs best for defective paddy seedling detection using aerial imagery. Thus, we evaluated the accuracy, robustness, and inference latency of one- and two-stage pretrained object detectors combined with state-of-the-art feature extractors such as EfficientNet, ResNet50, and MobileNetV2 as backbones. We also investigated the effect of transfer learning with fine-tuning on the performance of the aforementioned pretrained models. Experimental results showed that our proposed methods were capable of detecting defective paddy rice seedlings with the highest precision and F1-score, 0.83 and 0.77, respectively, using a one-stage pretrained object detector, EfficientDet-D1 with an EfficientNet backbone.
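As a hedged illustration of fine-tuning a pretrained object detector for a single "defective seedling" class: torchvision does not ship EfficientDet, so the sketch below swaps in its pretrained two-stage Faster R-CNN purely to show the transfer-learning mechanics; the class count, image size, and bounding box are fabricated.

```python
# Fine-tuning a COCO-pretrained two-stage detector for one custom object class.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2   # background + defective seedling
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One dummy training step on a fabricated image/target pair.
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)       # dict of classification/regression losses
sum(losses.values()).backward()
```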


2020
Vol 309
pp. 03037
Author(s): Dongqiu Xing, Rui Chen, Lihua Qi, Jing Zhao, Yi Wang

This study establishes a multi-source fault identification method based on a combined deep learning strategy to effectively identify multi-source faults in the diagnosis of complex industrial systems. The framework is composed of feature extraction and classifier design. In the first stage, the signal is transformed to the time-frequency domain and the time-frequency features are learned using stacked denoising autoencoders. A learning method that consists of unsupervised pre-learning and supervised fine-tuning is used to train this deep model. In the second stage, an ensemble classifier of multiple support vector machines is created to recognize fault information. Ten types of rolling bearing signals were adopted in a simulation experiment to validate the effectiveness of the proposed framework. The results demonstrate that the joint model helps to obtain higher recognition accuracy.
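A hedged sketch of the two stages under stated assumptions: a single denoising autoencoder (standing in for the stacked version) is pre-trained to reconstruct clean time-frequency feature vectors from noisy ones, and an ensemble of SVMs is then trained on the learned codes. The data and sizes are placeholders.

```python
# Stage 1: denoising auto-encoder pre-training; Stage 2: ensemble of SVM classifiers.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

n_samples, n_features, code_dim = 200, 128, 16
X = np.random.randn(n_samples, n_features).astype("float32")  # placeholder TF-domain features
y = np.random.randint(0, 10, size=n_samples)                   # ten bearing-signal classes

inputs = tf.keras.Input(shape=(n_features,))
code = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)
recon = tf.keras.layers.Dense(n_features)(code)
dae = tf.keras.Model(inputs, recon)
dae.compile(optimizer="adam", loss="mse")
dae.fit(X + 0.1 * np.random.randn(*X.shape), X, epochs=5, verbose=0)  # clean targets, noisy inputs

encoder = tf.keras.Model(inputs, code)
ensemble = BaggingClassifier(SVC(), n_estimators=5)   # multiple SVMs combined by voting
ensemble.fit(encoder.predict(X), y)
```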


2020
Vol 10 (4)
pp. 213
Author(s): Ki-Sun Lee, Jae Young Kim, Eun-tae Jeon, Won Suk Choi, Nan Hee Kim, ...

According to recent studies, patients with COVID-19 have different feature characteristics on chest X-ray (CXR) than those with other lung diseases. This study aimed to evaluate the effect of layer depth and degree of fine-tuning in transfer learning for deep convolutional neural network (CNN)-based COVID-19 screening in CXR, in order to identify efficient transfer learning strategies. The CXR images used in this study were collected from publicly available repositories, and the collected images were classified into three classes: COVID-19, pneumonia, and normal. To evaluate the effect of layer depth within the same CNN architecture, the CNNs VGG-16 and VGG-19 were used as backbone networks. Each backbone network was then trained with different degrees of fine-tuning and comparatively evaluated. The experimental results showed that the highest AUC value for COVID-19 classification, 0.950, was achieved when only 2 of the 5 blocks of the VGG-16 backbone network were fine-tuned. In conclusion, in the classification of medical images with a limited number of data, a deeper layer depth may not guarantee better results. In addition, even if the same pre-trained CNN architecture is used, an appropriate degree of fine-tuning can help to build an efficient deep learning model.
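A brief sketch of the partial fine-tuning setting that performed best, under assumed hyperparameters: VGG16 is loaded with ImageNet weights, its first three convolutional blocks are frozen, and a new three-class head (COVID-19 / pneumonia / normal) is attached. The head design and learning rate are illustrative, not the study's exact configuration.

```python
# Fine-tune only blocks 4-5 of VGG16; blocks 1-3 keep their ImageNet weights frozen.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
for layer in base.layers:
    # Only block4_* and block5_* layers stay trainable.
    layer.trainable = layer.name.startswith(("block4", "block5"))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),   # COVID-19 / pneumonia / normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```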


MIS Quarterly
2021
Vol 45 (2)
pp. 859-896
Author(s): Hongyi Zhu, Sagar Samtani, Randall Brown, Hsinchun Chen

Ensuring the health and safety of senior citizens who live alone is a growing societal concern. The Activity of Daily Living (ADL) approach is a common means to monitor disease progression and the ability of these individuals to care for themselves. However, the prevailing sensor-based ADL monitoring systems primarily rely on wearable motion sensors, capture insufficient information for accurate ADL recognition, and do not provide a comprehensive understanding of ADLs at different granularities. Current healthcare IS and mobile analytics research focuses on studying the system, device, and provided services, and is in need of an end-to-end solution to comprehensively recognize ADLs based on mobile sensor data. This study adopts the design science paradigm and employs advanced deep learning algorithms to develop a novel hierarchical, multiphase ADL recognition framework to model ADLs at different granularities. We propose a novel 2D interaction kernel for convolutional neural networks to leverage interactions between human and object motion sensors. We rigorously evaluate each proposed module and the entire framework against state-of-the-art benchmarks (e.g., support vector machines, DeepConvLSTM, hidden Markov models, and topic-modeling-based ADLR) on two real-life motion sensor datasets that consist of ADLs at varying granularities: Opportunity and INTER. Results and a case study demonstrate that our framework can recognize ADLs at different levels more accurately. We discuss how stakeholders can further benefit from our proposed framework. Beyond demonstrating practical utility, we discuss contributions to the IS knowledge base for future design science-based cybersecurity, healthcare, and mobile analytics applications.
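The following is purely illustrative and is not the paper's 2D interaction kernel: it only shows, under assumed sensor counts and window length, how stacking human and object motion-sensor streams into a 2D (sensor × time) input lets a 2D convolution mix information across both sensor groups as well as across time.

```python
# Generic illustration: a 2D convolution whose kernel spans all sensor rows.
import torch
import torch.nn as nn

n_human, n_object, window = 6, 4, 128                  # assumed sensor counts and window length
x = torch.randn(8, 1, n_human + n_object, window)      # batch of (sensor x time) "images"

conv = nn.Conv2d(in_channels=1, out_channels=16,
                 kernel_size=(n_human + n_object, 9),  # spans human and object sensor rows
                 padding=(0, 4))
features = conv(x)
print(features.shape)                                  # torch.Size([8, 16, 1, 128])
```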


Sensors
2018
Vol 18 (10)
pp. 3363
Author(s): Taylor Mauldin, Marc Canby, Vangelis Metsis, Anne Ngu, Coralys Rivera

This paper presents SmartFall, an Android app that uses accelerometer data collected from a commodity smartwatch Internet of Things (IoT) device to detect falls. The smartwatch is paired with a smartphone that runs the SmartFall application, which performs the computation necessary for the prediction of falls in real time without incurring latency in communicating with a cloud server, while also preserving data privacy. We experimented with both traditional (Support Vector Machine and Naive Bayes) and non-traditional (Deep Learning) machine learning algorithms for the creation of fall detection models using three different fall datasets (Smartwatch, Notch, Farseeing). Our results show that a Deep Learning model for fall detection generally outperforms more traditional models across the three datasets. This is attributed to the Deep Learning model’s ability to automatically learn subtle features from the raw accelerometer data that are not available to Naive Bayes and Support Vector Machine, which are restricted to learning from a small set of manually specified, extracted features. Furthermore, the Deep Learning model exhibits a better ability to generalize to new users when predicting falls, an important quality of any model that is to be successful in the real world. We also present the three-layer open IoT system architecture used in SmartFall, which can be easily adapted for the collection and analysis of other sensor data modalities (e.g., heart rate, skin temperature, walking patterns), enabling remote monitoring of a subject’s wellbeing.
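A hedged sketch of the "traditional" side of the comparison: simple per-axis statistics (illustrative choices, not SmartFall's exact feature set) are extracted from raw accelerometer windows and fed to Naive Bayes and an SVM, whereas the Deep Learning model in the paper learns its features from the raw windows directly.

```python
# Traditional fall-detection baseline: hand-crafted features + Naive Bayes / SVM.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

windows = np.random.randn(100, 128, 3)           # placeholder: 100 windows, 128 samples, 3 axes
labels = np.random.randint(0, 2, size=100)       # fall / no-fall

# Manually specified features: per-axis mean, standard deviation, and peak magnitude.
feats = np.hstack([windows.mean(axis=1), windows.std(axis=1),
                   np.abs(windows).max(axis=1)])

for clf in (GaussianNB(), SVC()):
    clf.fit(feats, labels)
    print(type(clf).__name__, clf.score(feats, labels))
```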


Information
2020
Vol 11 (9)
pp. 416
Author(s): Lei Chen, Shurui Fan, Vikram Kumar, Yating Jia

Human activity recognition (HAR) has been increasingly used in medical care, behavior analysis, and the entertainment industry to improve the experience of users. Most existing works use fixed models to identify various activities; however, they do not adapt well to the dynamic nature of human activities. We investigated activity recognition with postural transition awareness. The inertial sensor data were filtered, and a feature set was extracted from both the time domain and the frequency domain of the signals. Three feature selection algorithms were applied to the 585 extracted features to obtain the optimal feature subset for posture classification. Three classifiers (support vector machine, decision tree, and random forest) were then adopted for comparative analysis. In the experiments, the support vector machine gave better classification results than the other two methods, achieving up to 98% accuracy in multi-class classification. Finally, the results were verified by probability estimation.
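A minimal sketch of the comparison described above, with one univariate selector standing in for the three feature-selection algorithms the paper compares: the 585 extracted features are reduced to a subset and then fed to SVM, decision-tree, and random-forest classifiers. The data are placeholders.

```python
# Feature selection + three classifiers, compared with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.randn(300, 585)                 # placeholder: 585 time/frequency-domain features
y = np.random.randint(0, 6, size=300)         # placeholder activity / transition classes

for clf in (SVC(), DecisionTreeClassifier(), RandomForestClassifier()):
    pipe = make_pipeline(SelectKBest(f_classif, k=100), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```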


2021
Vol 2021
pp. 1-9
Author(s): Xiaojing Fan, Xiye Wang, Mingyang Jiang, Zhili Pei, Shicheng Qiao

Naru3 (NR) is a traditional Mongolian medicine with high clinical efficacy and a low incidence of side effects. Metabolomics is an approach that can facilitate the development of traditional drugs. However, metabolomic data are high-throughput, sparse, high-dimensional, and limited in sample size, which makes their classification challenging. Although deep learning methods have a wide range of applications, deep learning-based metabolomic studies have not been widely performed. We aimed to develop an improved stacked autoencoder (SAE) for metabolomic data classification. We established an NR-treated rheumatoid arthritis (RA) mouse model and classified the obtained metabolomic data using the Hessian-free SAE (HF-SAE) algorithm. During training, the unlabeled data were used for pretraining, and the labeled data were used for fine-tuning, with the HF algorithm used for optimization. The hybrid algorithm successfully classified the data. The results were compared with those of the support vector machine (SVM), k-nearest neighbor (KNN), and gradient descent SAE (GD-SAE) algorithms. Five-fold cross-validation was used to complete the classification experiment. In each fine-tuning process, the mean square error (MSE) and misclassification rates of the training and test data were recorded. We successfully established an NR animal model and an improved SAE for metabolomic data classification.
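A simplified sketch of the SAE workflow, with standard Adam standing in for the Hessian-free optimiser (which has no off-the-shelf Keras implementation): an autoencoder is pre-trained on unlabeled profiles, then its encoder plus a classification head are fine-tuned on the labeled data. Dimensions and data are placeholders.

```python
# Unsupervised pre-training of an auto-encoder, then supervised fine-tuning of its encoder.
import numpy as np
import tensorflow as tf

X_unlab = np.random.randn(200, 300).astype("float32")   # placeholder unlabeled metabolomic profiles
X_lab = np.random.randn(60, 300).astype("float32")      # placeholder labeled profiles
y_lab = np.random.randint(0, 2, size=60)                 # placeholder class labels

inputs = tf.keras.Input(shape=(300,))
code = tf.keras.layers.Dense(32, activation="relu")(inputs)
recon = tf.keras.layers.Dense(300)(code)

ae = tf.keras.Model(inputs, recon)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_unlab, X_unlab, epochs=5, verbose=0)            # unsupervised pre-training

encoder = tf.keras.Model(inputs, code)                    # shares the pre-trained weights
clf = tf.keras.Sequential([encoder,
                           tf.keras.layers.Dense(1, activation="sigmoid")])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(X_lab, y_lab, epochs=5, verbose=0)                # supervised fine-tuning
```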

