Abstract
Objective. Alzheimer's disease (AD) is a common disease of the elderly with unknown etiology, and its burden is growing with population aging and a trend toward earlier onset. Current artificial-intelligence methods based on individual information or magnetic resonance imaging (MRI) can achieve high diagnostic sensitivity and specificity, but still face challenges of interpretability and clinical feasibility. In this study, we propose an interpretable multimodal deep reinforcement learning model for inferring pathological features and diagnosing Alzheimer's disease.

Approach. First, for better clinical feasibility, the compressed-sensing MRI image is reconstructed by an interpretable deep reinforcement learning model. The reconstructed MRI is then input into a fully convolutional neural network to generate a pixel-level disease probability map (DPM) of the whole brain for Alzheimer's disease. Finally, the DPM of important brain regions, together with individual information, is input into an attention-based deep neural network to obtain the diagnosis and analyze biomarkers. A total of 1349 multi-center samples were used to construct and test the model.

Main Results. The model achieved areas under the curve (AUC) of 99.6% ± 0.2, 97.9% ± 0.2, and 96.1% ± 0.3 on the ADNI, AIBL, and NACC cohorts, respectively. The model also provides an effective multimodal pathological analysis, predicting imaging biomarkers on MRI and the weight of each item of individual information. The proposed model can therefore not only accurately diagnose AD but also analyze potential biomarkers.

Significance. The model builds a bridge between clinical practice and artificial-intelligence diagnosis and offers a viewpoint on the interpretability of artificial-intelligence technology.
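The diagnostic pipeline described above (MRI → pixel-level disease probability map → attention-based fusion of regional DPM scores with individual information) can be illustrated schematically. The sketch below is a minimal NumPy illustration under stated assumptions, not the authors' implementation: `disease_probability_map` stands in for the fully convolutional network with a single convolution plus sigmoid, `region_scores` and `attention_fusion` are hypothetical helpers, the region masks and individual features are made up, and the compressed-sensing reconstruction stage is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def disease_probability_map(mri, kernel):
    """Toy stand-in for the fully convolutional network: one 2-D
    convolution followed by a sigmoid yields a pixel-level map in (0, 1)."""
    h, w = mri.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(mri, pad)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return sigmoid(out)

def region_scores(dpm, regions):
    """Mean DPM value inside each (hypothetical) brain-region mask."""
    return np.array([dpm[mask].mean() for mask in regions])

def attention_fusion(region_feats, indiv_feats):
    """Attention-style fusion: softmax weights over all features,
    then a weighted sum scored through a sigmoid. The returned weights
    play the role of the per-feature importances the abstract mentions."""
    feats = np.concatenate([region_feats, indiv_feats])
    attn = np.exp(feats) / np.exp(feats).sum()
    return sigmoid(np.dot(attn, feats)), attn

# Synthetic 16x16 "MRI" slice and two crude region masks (upper/lower half).
mri = rng.random((16, 16))
kernel = rng.normal(size=(3, 3))
dpm = disease_probability_map(mri, kernel)

regions = [np.zeros((16, 16), bool), np.zeros((16, 16), bool)]
regions[0][:8] = True
regions[1][8:] = True
reg = region_scores(dpm, regions)

indiv = np.array([0.7, 0.3])  # made-up normalized individual features
prob, attn = attention_fusion(reg, indiv)
```

After running, `dpm` holds pixel-wise values in (0, 1), `attn` is a probability vector over the two regional and two individual features, and `prob` is the fused diagnosis score; the attention weights are what would be inspected for interpretability.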