A microstructural neural network biomarker for dystonia diagnosis identified by a DystoniaNet deep learning platform

2020 ◽  
Vol 117 (42) ◽  
pp. 26398-26405 ◽  
Author(s):  
Davide Valeriani ◽  
Kristina Simonyan

Isolated dystonia is a neurological disorder of heterogeneous pathophysiology, which causes involuntary muscle contractions leading to abnormal movements and postures. Its diagnosis is remarkably challenging due to the absence of a biomarker or gold standard diagnostic test. This leads to a low agreement between clinicians, with up to 50% of cases being misdiagnosed and diagnostic delays extending up to 10.1 y. We developed a deep learning algorithmic platform, DystoniaNet, to automatically identify and validate a microstructural neural network biomarker for dystonia diagnosis from raw structural brain MRIs of 612 subjects, including 392 patients with three different forms of isolated focal dystonia and 220 healthy controls. DystoniaNet identified clusters in corpus callosum, anterior and posterior thalamic radiations, inferior fronto-occipital fasciculus, and inferior temporal and superior orbital gyri as the biomarker components. These regions are known to contribute to abnormal interhemispheric information transfer, heteromodal sensorimotor processing, and executive control of motor commands in dystonia pathophysiology. The DystoniaNet-based biomarker showed an overall accuracy of 98.8% in diagnosing dystonia, with a referral of 3.5% of cases due to diagnostic uncertainty. The diagnostic decision by DystoniaNet was computed in 0.36 s per subject. DystoniaNet significantly outperformed shallow machine-learning algorithms in benchmark comparisons, showing nearly a 20% increase in its diagnostic performance. Importantly, the microstructural neural network biomarker and its DystoniaNet platform showed substantial improvement over the current 34% agreement on dystonia diagnosis between clinicians. The translational potential of this biomarker is in its highly accurate, interpretable, and generalizable performance for enhanced clinical decision-making.
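No code accompanies the abstract; as a rough sketch of the kind of pipeline described (a convolutional classifier over raw structural MRI that refers ambiguous cases for further review), one might write something like the following in PyTorch. The layer sizes, input handling, and probability thresholds are illustrative assumptions, not the published DystoniaNet architecture.

```python
# Minimal sketch of a volumetric CNN classifier with a "refer" band for
# uncertain cases; layer sizes and thresholds are illustrative assumptions,
# not the published DystoniaNet design.
import torch
import torch.nn as nn

class DystoniaClassifierSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 2)  # dystonia vs. healthy control

    def forward(self, x):                   # x: (batch, 1, D, H, W) raw MRI volume
        h = self.features(x).flatten(1)
        return self.classifier(h)

def diagnose(model, volume, low=0.4, high=0.6):
    """Return 'dystonia', 'control', or 'refer' when the probability is ambiguous."""
    model.eval()
    with torch.no_grad():
        prob = torch.softmax(model(volume.unsqueeze(0)), dim=1)[0, 1].item()
    if prob >= high:
        return "dystonia"
    if prob <= low:
        return "control"
    return "refer"  # mirrors the small fraction of cases referred for diagnostic uncertainty
```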

2021 ◽  
Vol 22 (Supplement_2) ◽  
Author(s):  
C Torlasco ◽  
D Papetti ◽  
R Mene ◽  
J Artico ◽  
A Seraphim ◽  
...  

Funding Acknowledgements: Type of funding sources: None.
Introduction: The extent of ischemic scar detected by cardiac magnetic resonance (CMR) with late gadolinium enhancement (LGE) is linked with long-term prognosis, but scar quantification is time-consuming. Deep learning (DL) approaches appear promising for CMR segmentation.
Purpose: To train and apply a deep learning approach to dark-blood (DB) CMR-LGE for ischemic scar segmentation, comparing results to a 4-standard-deviation (4-SD) semi-automated method.
Methods: We trained and validated a dual neural network infrastructure on a dataset of DB-LGE short-axis stacks, acquired at 1.5 T from 33 patients with ischemic scar. The DL architectures were an evolution of the U-Net convolutional neural network (CNN), using data augmentation to increase generalization. The CNNs worked together to identify and segment (1) the myocardium and (2) areas of LGE. The first CNN simultaneously cropped the region of interest (RoI) according to the bounding box of the heart and calculated the area of the myocardium. The cropped RoI was then processed by the second CNN, which identified the overall LGE area. The extent of scar was calculated as the ratio of the two areas. For comparison, endo- and epicardial borders were manually contoured and scars segmented by a 4-SD technique with validated software.
Results: The two U-Net networks were implemented with two free and open-source machine-learning software libraries. We performed 5-fold cross-validation over a dataset of 108 and 385 labelled CMR images of the myocardium and scar, respectively. For scar segmentation we obtained high performance (Intersection over Union, IoU, above approximately 0.85) on the training sets. For heart recognition the performance was lower (above approximately 0.7), although it improved (approximately 0.75) when detecting the cardiac area instead of the heart boundaries. On the validation set, performance oscillated between 0.8 and 0.85 for scar tissue recognition and dropped to approximately 0.7 for myocardium segmentation. We believe that underrepresented samples and noise might be affecting the overall performance, so additional data might be beneficial.
Figure 1: examples of heart segmentation (upper left panel: training; upper right panel: validation) and of scar segmentation (lower left panel: training; lower right panel: validation).
Conclusion: Our CNNs show promising results in automatically segmenting the LV and quantifying ischemic scars on DB-LGE CMR images. The performance of our method can be further improved by expanding the dataset used for training. If implemented in clinical routine, this process can speed up CMR analysis and aid clinical decision-making.
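The two quantities reported above, Intersection over Union and scar extent as a ratio of LGE to myocardial area, can be computed directly from binary masks; a minimal NumPy sketch (not the authors' code) is shown below.

```python
# Illustrative helpers for the two quantities reported above: Intersection over
# Union (IoU) between a predicted and a reference mask, and scar extent as the
# ratio of LGE area to myocardial area. Plain NumPy sketch, not the authors' code.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU of two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, true).sum() / union

def scar_extent(lge_mask: np.ndarray, myocardium_mask: np.ndarray) -> float:
    """Scar extent as the ratio of LGE pixels to myocardial pixels."""
    myo_area = myocardium_mask.astype(bool).sum()
    return lge_mask.astype(bool).sum() / myo_area if myo_area else 0.0
```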


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Steven A. Hicks ◽  
Jonas L. Isaksen ◽  
Vajira Thambawita ◽  
Jonas Ghouse ◽  
Gustav Ahlberg ◽  
...  

Deep learning-based tools may annotate and interpret medical data more quickly, consistently, and accurately than medical doctors. However, as medical doctors are ultimately responsible for clinical decision-making, any deep learning-based prediction should be accompanied by an explanation that a human can understand. We present an approach called electrocardiogram gradient class activation map (ECGradCAM), which is used to generate attention maps and explain the reasoning behind deep learning-based decision-making in ECG analysis. Attention maps may be used in the clinic to aid diagnosis, discover new medical knowledge, and identify novel features and characteristics of medical tests. In this paper, we showcase how ECGradCAM attention maps can unmask how a novel deep learning model measures both amplitudes and intervals in 12-lead electrocardiograms, and we show an example of how attention maps may be used to develop novel ECG features.
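A generic Grad-CAM-style computation for a 1D convolutional ECG model conveys the idea behind such attention maps; the following PyTorch sketch is a simplified illustration that makes assumptions about the model's output layout, not the authors' ECGradCAM implementation.

```python
# Generic Grad-CAM-style attention map for a 1D convolutional ECG model; a
# simplified sketch of the idea behind ECGradCAM, not the authors' implementation.
import torch

def grad_cam_1d(model, conv_layer, ecg, target_index):
    """Return an attention map over the time axis of one ECG input."""
    activations, gradients = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        output = model(ecg.unsqueeze(0))          # ecg: (channels, samples)
        model.zero_grad()
        output[0, target_index].backward()        # gradient of the chosen output value
        act, grad = activations[0], gradients[0]  # both (1, C, T) feature maps
        weights = grad.mean(dim=2, keepdim=True)  # channel importance
        cam = torch.relu((weights * act).sum(dim=1)).squeeze(0)
        return cam / (cam.max() + 1e-8)           # normalise to [0, 1]
    finally:
        h1.remove()
        h2.remove()
```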


2021 ◽  
Vol 28 (1) ◽  
pp. e100251
Author(s):  
Ian Scott ◽  
Stacey Carter ◽  
Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and in identifying situations where further refinement and evaluation are required prior to large-scale use.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Imogen Schofield ◽  
David C. Brodbelt ◽  
Noel Kennedy ◽  
Stijn J. M. Niessen ◽  
David B. Church ◽  
...  

Cushing’s syndrome is an endocrine disease in dogs that negatively impacts upon the quality-of-life of affected animals. Cushing’s syndrome can be a challenging diagnosis to confirm, therefore new methods to aid diagnosis are warranted. Four machine-learning algorithms were applied to predict a future diagnosis of Cushing's syndrome, using structured clinical data from the VetCompass programme in the UK. Dogs suspected of having Cushing's syndrome were included in the analysis and classified based on their final reported diagnosis within their clinical records. Demographic and clinical features available at the point of first suspicion by the attending veterinarian were included within the models. The machine-learning methods were able to classify the recorded Cushing’s syndrome diagnoses, with good predictive performance. The LASSO penalised regression model indicated the best overall performance when applied to the test set with an AUROC = 0.85 (95% CI 0.80–0.89), sensitivity = 0.71, specificity = 0.82, PPV = 0.75 and NPV = 0.78. The findings of our study indicate that machine-learning methods could predict the future diagnosis of a practicing veterinarian. New approaches using these methods could support clinical decision-making and contribute to improved diagnosis of Cushing’s syndrome in dogs.
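A hedged illustration of the best-performing model type reported here, an L1-penalised (LASSO-style) logistic regression evaluated with the same metrics, might look as follows in scikit-learn; the data handling and hyperparameters are placeholders, not the study's VetCompass features.

```python
# Sketch of an L1-penalised (LASSO-style) logistic regression classifier with the
# evaluation metrics reported above; feature names, data loading, and the split
# are placeholders, not the variables or protocol used in the study.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    model.fit(X_train, y_train)

    probs = model.predict_proba(X_test)[:, 1]
    preds = (probs >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    return {
        "AUROC": roc_auc_score(y_test, probs),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }
```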


2018 ◽  
Vol 16 (1) ◽  
Author(s):  
David Benrimoh ◽  
Robert Fratila ◽  
Sonia Israel ◽  
Kelly Perlman

Globally, depression affects 300 million people and is projected to be the leading cause of disability by 2030. While different patients are known to benefit from different therapies, there is no principled way for clinicians to predict individual patient responses or side effect profiles. A form of machine learning based on artificial neural networks, deep learning, might be useful for generating a predictive model that could aid in clinical decision making. Such a model’s primary outcomes would be to help clinicians select the most effective treatment plans and mitigate adverse side effects, allowing doctors to provide greater personalized care to a larger number of patients. In this commentary, we discuss the need for personalization of depression treatment and how a deep learning model might be used to construct a clinical decision aid.


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that allow computing systems to perform tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm for a particular task, machine learning algorithms can provide an output based on previous training data. A well-known machine learning approach is deep learning. The most recent deep learning models are based on artificial neural networks (ANN). There exist several types of artificial neural networks, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, and the modular neural network, among others. This article focuses on convolutional neural networks, with a description of the model, the training and inference processes, and its applicability. It will also give an overview of the most used CNN models and what to expect from the next generation of CNN models.
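As a concrete illustration of the building blocks described, convolution, pooling, and fully connected layers, followed by a training step and inference, a minimal PyTorch sketch with assumed input and output sizes could look like this.

```python
# Minimal convolutional neural network illustrating the building blocks
# discussed in the article: convolution, pooling, and fully connected layers.
# Shapes assume 28x28 grayscale inputs and 10 output classes (illustrative only).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# Training adjusts the weights from labelled examples; inference only runs forward.
model = SmallCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
images, labels = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)   # one training step
loss.backward()
optimizer.step()
with torch.no_grad():                   # inference
    predictions = model(images).argmax(dim=1)
```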


Author(s):  
Nirmal Yadav

Applying machine learning in life sciences, especially diagnostics, has become a key area of focus for researchers. Combining machine learning with traditional algorithms provides a unique opportunity to deliver better solutions for patients. In this paper, we present the results of applying the Ridgelet transform to retina images to enhance the blood vessels, and then using machine learning algorithms to identify cases of diabetic retinopathy (DR). The Ridgelet transform better represents line singularities of the image function and thus helps to reduce artefacts along the edges of the image. Compared with earlier image-enhancement methods, such as the Wavelet and Contourlet transforms, the Ridgelet transform provided satisfactory results. The Ridgelet-transformed image, together with pre-processing, quantifies the amount of information in the dataset and efficiently enhances the generation of feature vectors in the convolutional neural network (CNN). In this study, a sample of fundus photographs obtained from a publicly available dataset was processed. In pre-processing, CLAHE was first applied, followed by filtering and application of the Ridgelet transform to image patches to improve image quality. The processed images were then used for statistical feature detection and classified by a deep learning method to detect DR images in the dataset. The successful classification ratio was 98.61%. These results indicate that Ridgelet-transformed fundus images enable better detection by combining a transform-based algorithm with deep learning.
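A sketch of the pre-processing order described above, CLAHE, filtering, and a ridgelet transform applied to patches, is given below; the OpenCV calls are standard, while ridgelet_transform is a hypothetical placeholder because the abstract does not name a specific implementation.

```python
# Sketch of the described pre-processing order: CLAHE, filtering, then a
# ridgelet transform applied to image patches. cv2 calls are standard OpenCV;
# ridgelet_transform is a hypothetical placeholder, not a named library function.
import cv2
import numpy as np

def preprocess_fundus(path, patch_size=64):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)                      # contrast-limited equalisation
    filtered = cv2.medianBlur(enhanced, 3)            # simple noise filtering
    patches = [
        filtered[r:r + patch_size, c:c + patch_size]
        for r in range(0, filtered.shape[0] - patch_size + 1, patch_size)
        for c in range(0, filtered.shape[1] - patch_size + 1, patch_size)
    ]
    return [ridgelet_transform(p) for p in patches]   # placeholder transform

def ridgelet_transform(patch: np.ndarray) -> np.ndarray:
    """Placeholder for a ridgelet transform (e.g. Radon projection plus 1D wavelet)."""
    raise NotImplementedError
```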


2020 ◽  
Vol 39 (5) ◽  
pp. 7931-7952
Author(s):  
Gaurav Tripathi ◽  
Kuldeep Singh ◽  
Dinesh Kumar Vishwakarma

Violence detection is a challenging task in the computer vision domain. Violence-detection frameworks depend on detecting changes in crowd behaviour. Violence erupts due to disagreement over an idea, injustice, or severe conflict. The aim of any country is to maintain law, order, and peace, so violence detection becomes an important task for the authorities. Traditional methods for violence detection exist, but they rely heavily on hand-crafted features. The field is now transitioning to artificial-intelligence-based techniques, in which automatic feature extraction and classification from images and videos is the new norm in the surveillance domain. Deep learning provides the platform on which non-linear features can be extracted, learnt, and classified with the appropriate tool. One such tool is the convolutional neural network (ConvNet), which can automatically extract features and classify them into their respective domains. To date there is no survey of violence-detection techniques based on ConvNets. We hope that this survey becomes a baseline for future violence detection and analysis in the deep learning domain.


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1253
Author(s):  
Muhammad Afzal ◽  
Beom Joo Park ◽  
Maqbool Hussain ◽  
Sungyoung Lee

A major barrier to supporting evidence-based clinical decision-making is accurately and efficiently recognizing appropriate and scientifically rigorous studies in the biomedical literature. We trained a multi-layer perceptron (MLP) model on a dataset with two textual features, title and abstract. The dataset, consisting of 7958 PubMed citations classified into two classes (scientific rigor and non-rigor), was used to train the proposed model. We compared our model with other promising machine learning models, such as Support Vector Machine (SVM), Decision Tree, Random Forest, and Gradient Boosted Tree (GBT) approaches. Based on the higher cumulative score, the deep learning model was chosen and tested on test datasets obtained by running a set of domain-specific queries. On the training dataset, the proposed deep learning model obtained significantly higher accuracy (97.3%) and AUC (0.993) than the competitors, but a slightly lower recall (95.1%) than GBT. The trained model sustained this performance on the testing datasets. Unlike previous approaches, the proposed model does not require a human expert to create fresh annotated data; instead, we used studies cited in Cochrane reviews as a surrogate for quality studies in a clinical topic. We conclude that deep learning methods are beneficial for biomedical literature classification. Not only do such methods minimize the workload in feature engineering, but they also show better performance on large and noisy data.
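A generic sketch in the spirit of the described setup, text features from title and abstract fed to a multi-layer perceptron, is shown below using scikit-learn; the TF-IDF vectorisation and hyperparameters are assumptions, not the authors' configuration.

```python
# Generic sketch of a text classifier in the spirit described above: TF-IDF
# features over title + abstract, fed to a multi-layer perceptron. The
# vectorisation and hyperparameters are illustrative assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

def build_rigor_classifier():
    return make_pipeline(
        TfidfVectorizer(max_features=20000, ngram_range=(1, 2)),
        MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=50),
    )

# Usage: join the title and abstract into one text field per citation.
texts = [
    "Randomized controlled trial of drug X. Abstract: We enrolled 400 patients ...",
    "Anecdotal case report. Abstract: We describe a single patient ...",
]
labels = [1, 0]  # 1 = scientifically rigorous, 0 = non-rigor
clf = build_rigor_classifier().fit(texts, labels)
print(clf.predict(["Case report: a single patient with ..."]))
```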

