Deep Learning-Based Parkinson Disease Classification Using PET Scan Imaging Data

Author(s):  
Hetav Modi ◽  
Jigna Hathaliya ◽  
Mohammad S. Obaidat ◽  
Rajesh Gupta ◽  
Sudeep Tanwar
2020 ◽  
Vol 30 (06) ◽  
pp. 2050032
Author(s):  
Wei Feng ◽  
Nicholas Van Halm-Lutterodt ◽  
Hao Tang ◽  
Andrew Mecum ◽  
Mohamed Kamal Mesregah ◽  
...  

In the context of neuro-pathological disorders, neuroimaging has been widely accepted as a clinical tool for diagnosing patients with Alzheimer’s disease (AD) and mild cognitive impairment (MCI). In this study, an advanced deep learning method was applied to evaluate its contribution to improving the diagnostic accuracy of AD. Three-dimensional convolutional neural networks (3D-CNNs) were applied to magnetic resonance imaging (MRI) data to build binary and ternary disease classification models. The dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) was used to compare the deep learning performances across 3D-CNN, 3D-CNN-support vector machine (SVM) and two-dimensional (2D)-CNN models. The ternary classification accuracies for 2D-CNN, 3D-CNN and 3D-CNN-SVM were [Formula: see text]%, [Formula: see text]% and [Formula: see text]%, respectively. The 3D-CNN-SVM yielded ternary classification accuracies of 93.71%, 96.82% and 96.73% for NC, MCI and AD diagnoses, respectively. Furthermore, 3D-CNN-SVM showed the best performance for binary classification. Our study indicated that ‘NC versus MCI’ showed accuracy, sensitivity and specificity of 98.90%, 98.90% and 98.80%; ‘NC versus AD’ showed accuracy, sensitivity and specificity of 99.10%, 99.80% and 98.40%; and ‘MCI versus AD’ showed accuracy, sensitivity and specificity of 89.40%, 86.70% and 84.00%, respectively. This study clearly demonstrates that 3D-CNN-SVM yields better performance with MRI compared to currently utilized deep learning methods. In addition, 3D-CNN-SVM proved to be efficient without requiring any prior manual feature extraction and is independent of the variability of imaging protocols and scanners. This suggests that it can potentially be used by untrained operators and extended to virtual patient imaging data.
Furthermore, owing to the safety, noninvasiveness and nonirradiative properties of the MRI modality, 3D-CNN-SVM may serve as an effective screening option for AD in the general population. This study holds value in distinguishing AD and MCI subjects from normal controls and in improving value-based care of patients in clinical practice.
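The per-task accuracy, sensitivity and specificity reported above follow directly from a binary confusion matrix. A minimal sketch of how these three metrics relate to the four confusion counts (the counts below are hypothetical, for illustration only, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall of the disease class)
    specificity = tn / (tn + fp)   # true-negative rate (recall of the control class)
    return accuracy, sensitivity, specificity

# Hypothetical 'NC versus AD' confusion counts, chosen only to exercise the formulas
acc, sens, spec = binary_metrics(tp=95, fp=2, tn=98, fn=5)
```

Note that sensitivity and specificity are insensitive to class imbalance, which is why abstracts typically report them alongside raw accuracy.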


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
D Di Vece ◽  
F Laumer ◽  
M Schwyzer ◽  
R Burkholz ◽  
L Corinzia ◽  
...  

Abstract Background Machine learning allows classifying diseases based only on raw echocardiographic imaging data and is therefore a landmark in the development of computer-assisted decision support systems in echocardiography. Purpose The present study sought to determine the value of deep (machine) learning systems for automatic discrimination of takotsubo syndrome and acute myocardial infarction. Methods Apical 2- and 4-chamber echocardiographic views of 110 patients with takotsubo syndrome and 110 patients with acute myocardial infarction were used in the development, training and validation of a deep learning approach, i.e. a convolutional autoencoder (CAE) for feature extraction followed by classical machine learning models for classification of the diseases. Results The deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.801 with an overall accuracy of 74.5% under 5-fold cross-validation evaluated on a clinically relevant dataset. In comparison, experienced cardiologists achieved AUCs in the range 0.678–0.740 and an average accuracy of 64.5% on the same dataset. Conclusions A real-time system for fully automated interpretation of echocardiographic videos was established and trained to differentiate takotsubo syndrome from acute myocardial infarction. The framework provides insight into the algorithms' decision process for physicians and yields new and valuable information on the manifestation of disease patterns in echocardiographic data. While our system was superior to cardiologists in echocardiography-based disease classification, further studies should be conducted in a larger patient population to confirm its clinical applicability. Funding Acknowledgement Type of funding source: None
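The AUC figures compared above (0.801 for the model versus 0.678–0.740 for cardiologists) have a simple probabilistic reading: the AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A self-contained sketch of that rank-based (Mann-Whitney) computation, independent of any particular model:

```python
def auc_score(scores_pos, scores_neg):
    """AUC as the probability that a random positive case is scored
    above a random negative case; ties contribute 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

perfect = auc_score([0.9, 0.8], [0.1, 0.2])  # complete separation
chance = auc_score([0.5], [0.5])             # indistinguishable scores
```

The O(n*m) pairwise loop is fine for illustration; production code would sort once and use rank sums instead.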


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

Abstract The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
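One way to read the saliency constraint described above is as a penalty on disagreement between the foreground masks of the input and the transformed output, which discourages the unpaired mapping from inventing or deleting content. The toy numpy sketch below illustrates that reading only; the simple intensity threshold and the loss form are assumptions for illustration, not UTOM's actual formulation:

```python
import numpy as np

def saliency_mask(img, thresh=0.5):
    """Binary foreground mask by intensity thresholding
    (a stand-in for whatever saliency estimator is used)."""
    return (img > thresh).astype(np.float32)

def saliency_constraint_loss(src, gen, thresh=0.5):
    """Fraction of pixels whose foreground/background assignment
    changed between the source image and the generated image."""
    m_src = saliency_mask(src, thresh)
    m_gen = saliency_mask(gen, thresh)
    return float(np.abs(m_src - m_gen).mean())

img = np.zeros((4, 4), dtype=np.float32)
img[:2, :] = 1.0                                      # top half is "content"
loss_same = saliency_constraint_loss(img, img)        # content preserved
loss_flip = saliency_constraint_loss(img, 1.0 - img)  # content inverted
```

In a full unpaired-translation setup this term would be added to the adversarial and cycle losses, so the generator is rewarded for changing appearance but not structure.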


2021 ◽  
Author(s):  
J. Annrose ◽  
N. Herald Anantha Rufus ◽  
C. R. Edwin Selva Rex ◽  
D. Godwin Immanuel

Abstract Bean, botanically called Phaseolus vulgaris L., belongs to the Fabaceae family. During bean disease identification, unnecessary economic losses occur due to delayed treatment, incorrect treatment, and lack of knowledge. Existing deep learning and machine learning techniques suffer from issues such as high computational complexity, higher cost associated with the training data, longer execution time, noise, feature dimensionality, lower accuracy, low speed, etc. To tackle these problems, we have proposed a hybrid deep learning model with an Archimedes optimization algorithm (HDL-AOA) for bean disease classification. In this work, there are five bean classes, of which one is a healthy class, whereas the remaining four classes indicate different diseases, namely Bean halo blight, Pythium diseases, Rhizoctonia root rot, and Anthracnose abnormalities, acquired from the Soybean (Large) Data Set. The hybrid deep learning technique is the combination of wavelet packet decomposition (WPD) and long short-term memory (LSTM). Initially, the WPD decomposes the input images into four sub-series. For these sub-series, four LSTM networks were developed. During bean disease classification, an Archimedes optimization algorithm (AOA) enhances the classification accuracy of the multiple single LSTM networks. The HDL-AOA model for bean disease classification is implemented in MATLAB software. The proposed model accomplishes lower MAPE than other existing methods. Finally, the proposed HDL-AOA model delivers excellent classification results under different evaluation measures such as accuracy, specificity, sensitivity, precision, recall, and F-score.
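The WPD front end described above splits each input into four sub-series before the per-band LSTMs. A minimal numpy sketch of one level of a 2-D Haar wavelet (packet) decomposition, which is the simplest wavelet producing exactly four half-resolution sub-bands; the abstract does not state which wavelet family is used, so Haar here is an assumption:

```python
import numpy as np

def haar_wpd_level1(img):
    """One level of 2-D Haar wavelet decomposition: splits an even-sized
    image into four half-resolution sub-bands (approximation LL plus
    detail bands LH, HL, HH). The 1/2 scaling keeps the transform
    orthonormal, so total energy is preserved."""
    a = img[0::2, 0::2]   # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2.0   # approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ll, lh, hl, hh = haar_wpd_level1(img)
```

Each of the four sub-bands would then feed its own LSTM branch, with the branch outputs combined for the final class decision.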


Author(s):  
Yanteng Zhang ◽  
Qizhi Teng ◽  
Linbo Qing ◽  
Yan Liu ◽  
Xiaohai He

Alzheimer’s disease (AD) is a degenerative brain disease and the most common cause of dementia. In recent years, with the widespread application of artificial intelligence in the medical field, various deep learning-based methods have been applied for AD detection using sMRI images. Many of these networks achieved AD vs HC (Healthy Control) classification accuracy of up to 90%, but at the cost of a large number of computational parameters and floating point operations (FLOPs). In this paper, we adopt a novel ghost module, which uses a series of cheap linear-transformation operations to generate more feature maps, embedded into our designed ResNet architecture for the task of AD vs HC classification. According to experiments on the OASIS dataset, our lightweight network achieves a promising accuracy of 97.92%, and its total parameter count is dozens of times smaller than that of state-of-the-art deep learning networks. Our proposed AD classification network achieves better performance while the computational cost is reduced significantly.
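The parameter savings claimed above come from the ghost module's split between a small primary convolution and cheap depthwise operations. A sketch of the weight counting behind that saving (bias and normalization parameters ignored; the layer sizes and the ratio of 2 are illustrative assumptions, not figures from this paper):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (no bias)."""
    return c_out * c_in * k * k

def ghost_params(c_in, c_out, k, ratio=2, dw_k=3):
    """Weights in a ghost module: a primary convolution produces
    c_out // ratio 'intrinsic' maps, then cheap depthwise dw_k x dw_k
    linear operations generate the remaining 'ghost' maps."""
    intrinsic = c_out // ratio
    primary = intrinsic * c_in * k * k         # ordinary convolution weights
    cheap = (c_out - intrinsic) * dw_k * dw_k  # one depthwise filter per ghost map
    return primary + cheap

standard = conv_params(64, 128, 3)  # full convolution
ghost = ghost_params(64, 128, 3)    # ghost replacement, roughly half the weights
```

Because the primary convolution dominates, the saving approaches a factor of `ratio` as the channel counts grow, which is what makes the resulting network lightweight.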


2021 ◽  
Vol 135 (20) ◽  
pp. 2357-2376
Author(s):  
Wei Yan Ng ◽  
Shihao Zhang ◽  
Zhaoran Wang ◽  
Charles Jit Teng Ong ◽  
Dinesh V. Gunasekeran ◽  
...  

Abstract Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in Ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage and only a handful are able to achieve clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge would require a multi-pronged approach. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide the development of DLSs.
