Deep learning detection of informative features in tau PET for Alzheimer’s disease classification

2020
Author(s):
Taeho Jo
Kwangsik Nho
Shannon L. Risacher
Andrew J. Saykin

Abstract

Background: Alzheimer’s disease (AD) is the most common type of dementia, typically characterized by memory loss followed by progressive cognitive decline and functional impairment. Many clinical trials of potential therapies for AD have failed, and there is currently no approved disease-modifying treatment. Biomarkers for early detection and mechanistic understanding of disease course are critical for drug development and clinical trials. Amyloid has been the focus of most biomarker research. Here, we developed a deep learning-based framework to identify informative features for AD classification using tau positron emission tomography (PET) scans.

Methods: We analysed [18F]flortaucipir PET image data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. We first developed an image classifier to distinguish AD from cognitively normal (CN) older adults by training a 3D convolutional neural network (CNN)-based deep learning model on tau PET images (N=132; 66 CN and 66 AD), then applied the classifier to images from individuals with mild cognitive impairment (MCI; N=168). In addition, we applied a layer-wise relevance propagation (LRP)-based model to identify informative features and to visualize classification results. We compared these results with those from a whole-brain voxel-wise between-group analysis using conventional Statistical Parametric Mapping (SPM12).

Results: The 3D CNN-based classification model of AD versus CN yielded an average accuracy of 90.8% based on five-fold cross-validation. The LRP model identified the brain regions in tau PET images that contributed most to the classification of AD versus CN. The top identified regions included the hippocampus, parahippocampus, thalamus, and fusiform. The LRP results were consistent with those from the voxel-wise analysis in SPM12, showing significant focal AD-associated regional tau deposition in the bilateral temporal lobes, including the entorhinal cortex. The AD probability scores calculated by the classifier were correlated with brain tau deposition in the medial temporal lobe in MCI participants (r=0.43 for early MCI and r=0.49 for late MCI).

Conclusion: A deep learning framework combining 3D CNN and LRP algorithms can be used with tau PET images to identify informative features for AD classification and may have application for early detection during prodromal stages of AD.
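The LRP step described above redistributes the classifier's output score backwards through the network, layer by layer, so that each input voxel receives a share of the total relevance. The abstract does not include code, so the following is only a minimal numpy sketch of the LRP-ε rule on a toy two-layer ReLU network (not the authors' 3D CNN; the sizes, weights, and input are illustrative stand-ins). The key property it preserves is that the input relevances sum approximately to the output score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network: 8 "voxels" -> 4 hidden units -> 1 AD score.
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

x = rng.uniform(size=8)            # stand-in for flattened PET intensities
a1 = np.maximum(0, x @ W1 + b1)    # hidden ReLU activations
out = (a1 @ W2 + b2).item()        # classifier output score

def lrp_linear(a, W, R_out, eps=1e-9):
    """LRP-epsilon rule for a linear layer: redistribute the relevance
    R_out of the layer's outputs back to its inputs a, proportional to
    each input's contribution a_j * w_jk to the pre-activation z_k."""
    z = a @ W                           # pre-activations of the next layer
    z = z + eps * np.sign(z + 1e-12)    # stabilise near-zero denominators
    s = R_out / z                       # relevance per unit of activation
    return a * (W @ s)                  # relevance assigned to each input

# Propagate the output score back to the hidden layer, then to the input.
R_hidden = lrp_linear(a1, W2, np.array([out]))
R_input = lrp_linear(x, W1, R_hidden)
```

Because the toy network has zero biases and a tiny ε, relevance is (approximately) conserved at each layer: `R_input.sum()` stays close to `out`, which is what lets the per-voxel relevances be read as shares of the classification decision.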

2020
Vol 21 (S21)
Author(s):
Taeho Jo
Kwangsik Nho
Shannon L. Risacher
Andrew J. Saykin



2020
Vol 30 (06)
pp. 2050032
Author(s):
Wei Feng
Nicholas Van Halm-Lutterodt
Hao Tang
Andrew Mecum
Mohamed Kamal Mesregah
...

In the context of neuropathological disorders, neuroimaging has been widely accepted as a clinical tool for diagnosing patients with Alzheimer’s disease (AD) and mild cognitive impairment (MCI). In this study, an advanced deep learning method was applied to evaluate its contribution to improving the diagnostic accuracy of AD. Three-dimensional convolutional neural networks (3D-CNNs) were applied to magnetic resonance imaging (MRI) to build binary and ternary disease classification models. The dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) was used to compare deep learning performance across 3D-CNN, 3D-CNN-support vector machine (SVM) and two-dimensional (2D)-CNN models. The ternary classification accuracies for 2D-CNN, 3D-CNN and 3D-CNN-SVM were [Formula: see text]%, [Formula: see text]% and [Formula: see text]%, respectively. The 3D-CNN-SVM yielded ternary classification accuracies of 93.71%, 96.82% and 96.73% for NC, MCI and AD diagnoses, respectively. Furthermore, 3D-CNN-SVM showed the best performance for binary classification: ‘NC versus MCI’ showed accuracy, sensitivity and specificity of 98.90%, 98.90% and 98.80%; ‘NC versus AD’ showed accuracy, sensitivity and specificity of 99.10%, 99.80% and 98.40%; and ‘MCI versus AD’ showed accuracy, sensitivity and specificity of 89.40%, 86.70% and 84.00%, respectively. This study demonstrates that 3D-CNN-SVM yields better performance with MRI than currently utilized deep learning methods. In addition, 3D-CNN-SVM proved efficient without requiring any manual prior feature extraction and is independent of the variability of imaging protocols and scanners, suggesting that it can potentially be used by untrained operators and extended to virtual patient imaging data. Furthermore, owing to the safety, noninvasiveness and nonirradiative properties of the MRI modality, 3D-CNN-SVM may serve as an effective screening option for AD in the general population. This study holds value in distinguishing AD and MCI subjects from normal controls and in improving value-based care of patients in clinical practice.
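The hybrid 3D-CNN-SVM design described above can be pictured as a two-stage pipeline: a convolutional stage reduces each 3D volume to a feature vector, and an SVM performs the final classification on those features. Below is a toy sketch of that pipeline, assuming scikit-learn is available and substituting a fixed average-pooling step for the trained CNN feature extractor (the real model learns its features end to end; the synthetic "scans" and signal region are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def pooled_features(vol, k=4):
    """Stand-in for CNN feature extraction: non-overlapping k^3 average
    pooling of a 3D volume, flattened into a feature vector."""
    d = vol.shape[0] // k
    v = vol[:d * k, :d * k, :d * k].reshape(d, k, d, k, d, k)
    return v.mean(axis=(1, 3, 5)).ravel()

def make_volume(label, size=16):
    """Synthetic 'scan': class 1 carries extra signal in one subregion."""
    vol = rng.normal(size=(size, size, size))
    if label == 1:
        vol[:8, :8, :8] += 1.5   # localised group difference
    return vol

# Stage 1: extract features from each volume; Stage 2: fit the SVM.
y = np.array([0, 1] * 20)
X = np.array([pooled_features(make_volume(label)) for label in y])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

The design point the hybrid exploits is that the SVM replaces the CNN's final fully connected classifier head, which can help on small neuroimaging datasets where the convolutional features are informative but a large dense head would overfit.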


2017
Vol 107
pp. 85-104
Author(s):
Raju Anitha
S. Jyothi
Venkata Naresh Mandhala
Debnath Bhattacharyya
Tai-hoon Kim

2021
Vol 17 (S9)
Author(s):
Guoqiao Wang
Yan Li
Chengjie Xiong
Tammie L.S. Benzinger
Brian A. Gordon
...

2020
Author(s):
Bin Lu
Hui-Xian Li
Zhi-Kai Chang
Le Li
Ning-Xuan Chen
...

Abstract

Beyond detecting brain damage or tumors, little success has been attained in identifying individual differences and brain disorders with magnetic resonance imaging (MRI). Here, we sought to build industrial-grade brain imaging-based classifiers to infer two types of such inter-individual differences, sex and Alzheimer’s disease (AD), using deep learning/transfer learning on big data. We pooled brain structural data from 217 sites/scanners to constitute the largest brain MRI sample to date (85,721 samples from 50,876 participants), and applied a state-of-the-art deep convolutional neural network, Inception-ResNet-V2, to build a sex classifier with high generalizability. In cross-dataset validation, the sex classification model was able to classify the sex of any participant with brain structural imaging data from any scanner with 94.9% accuracy. We then applied transfer learning based on this model to objectively diagnose AD, achieving 88.4% accuracy in cross-site validation on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset and 91.2% / 86.1% accuracy in a direct test on two unseen independent datasets (AIBL / OASIS). Directly testing this AD classifier on brain images of unseen mild cognitive impairment (MCI) patients, the model correctly predicted AD in 63.2% of those who eventually converted to AD, versus predicting AD in 22.1% of those who did not convert during follow-up. Predicted scores of the AD classifier correlated significantly with illness severity. By contrast, the transfer learning framework was unable to achieve practical accuracy for psychiatric disorders. To improve the interpretability of the deep learning models, occlusion tests revealed that the hypothalamus, superior vermis, thalamus, amygdala and limbic system areas were critical for predicting sex, while the hippocampus, parahippocampal gyrus, putamen and insula played key roles in predicting AD.
Our trained model, code, preprocessed data and an online prediction website have been openly shared to advance the clinical utility of brain imaging.
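The occlusion tests mentioned above measure how much the classifier's score drops when each brain region is masked out; the regions causing the largest drops are read as the most informative. A self-contained toy version is sketched below, with a hand-built linear scorer standing in for the trained Inception-ResNet-V2 model (the volume size, patch size, and weighted region are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical frozen "classifier": a linear scorer whose weights
# concentrate on one subregion (a stand-in for, e.g., the hippocampus).
w = np.zeros((16, 16, 16))
w[4:8, 4:8, 4:8] = 1.0

def score(vol):
    return float((vol * w).sum())

vol = rng.uniform(0.5, 1.0, size=(16, 16, 16))
base = score(vol)

# Occlusion test: zero out each 4x4x4 patch and record the score drop.
drops = np.zeros((4, 4, 4))
for i in range(4):
    for j in range(4):
        for k in range(4):
            occluded = vol.copy()
            occluded[i*4:(i+1)*4, j*4:(j+1)*4, k*4:(k+1)*4] = 0.0
            drops[i, j, k] = base - score(occluded)

# The patch covering the weighted region dominates the sensitivity map.
top = tuple(int(i) for i in np.unravel_index(drops.argmax(), drops.shape))
print(top)  # -> (1, 1, 1)
```

The appeal of this approach for model interpretation is that it treats the network as a black box: no gradients or architecture details are needed, only repeated forward passes with masked inputs.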

