GLIA-Deep: Glioblastoma Image Analysis Using Deep Learning Convolutional Neural Networks to Accurately Classify Gene Methylation and Predict Drug Effectiveness

Author(s):  
Viraj Mehta

Glioblastoma multiforme is a deadly brain cancer with a median patient survival time of 18-24 months despite aggressive treatment. This limited success reflects a combination of aggressive tumor behavior, genetic heterogeneity within a single patient's tumor, resistance to therapy, and a lack of precision-medicine treatments. A single biopsy specimen cannot fully characterize the tumor's microenvironment, making personalized care limited and challenging. Temozolomide (TMZ) is a clinically approved alkylating agent used to treat glioblastoma, but around 50% of TMZ-treated patients do not respond to it due to over-expression of O6-methylguanine-DNA methyltransferase (MGMT). MGMT is a DNA repair enzyme that rescues tumor cells from alkylating-agent-induced damage, conferring resistance to chemotherapy drugs. Epigenetic silencing of the MGMT gene by promoter methylation results in decreased MGMT protein expression, reduced DNA repair activity, increased sensitivity to TMZ, and longer survival time. It is therefore paramount that clinicians determine the methylation status of each patient's tumor in order to select personalized chemotherapy. However, current methods for determining this status, invasive biopsies or manually curated features from brain MRI (magnetic resonance imaging) scans, are time- and cost-intensive and have low accuracy. The authors present a novel approach that uses convolutional neural networks to predict methylation status and recommend patient-specific treatments from an analysis of brain MRI scans. They have developed an AI platform, GLIA-Deep, combining a U-Net architecture and a ResNet-50 architecture trained on genomic data from TCGA (The Cancer Genome Atlas, National Cancer Institute) and brain MRI scans from TCIA (The Cancer Imaging Archive).
GLIA-Deep performs tumor region identification and determines MGMT methylation status with 90% accuracy in under 5 seconds, a real-time analysis that eliminates the large time and cost investments of invasive biopsies. Using computational modeling, the platform further recommends microRNAs that down-regulate MGMT gene expression through translational repression, sensitizing glioma cells to TMZ and thereby improving the survival of glioblastoma patients with unmethylated MGMT. GLIA-Deep is a fully integrated, end-to-end, cost-effective, and time-efficient platform that advances precision medicine by recommending personalized therapies from an analysis of an individual patient's MRI scans, expanding glioblastoma treatment options.
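The two-stage flow described above (segment the tumor region first, then classify MGMT status from the masked region) can be sketched schematically. In this minimal sketch the stub functions stand in for the trained U-Net and ResNet-50; all names, thresholds, and the toy decision rule are illustrative assumptions, not the authors' code:

```python
def identify_tumor_region(mri_slice):
    """Stage 1 stand-in for the trained U-Net: return a binary tumor mask.
    A toy intensity threshold substitutes for real learned segmentation."""
    return [[1 if voxel > 0.5 else 0 for voxel in row] for row in mri_slice]

def classify_mgmt_status(mri_slice, tumor_mask):
    """Stage 2 stand-in for the trained ResNet-50 classifier.
    A toy rule on mean tumor intensity substitutes for the real model,
    which maps the masked region to a methylation label."""
    tumor_voxels = [v for row, mrow in zip(mri_slice, tumor_mask)
                    for v, m in zip(row, mrow) if m]
    if not tumor_voxels:
        return "no tumor detected"
    mean_intensity = sum(tumor_voxels) / len(tumor_voxels)
    return "methylated" if mean_intensity > 0.7 else "unmethylated"

def glia_deep_pipeline(mri_slice):
    """End-to-end flow: segment first, then classify the masked region."""
    mask = identify_tumor_region(mri_slice)
    return classify_mgmt_status(mri_slice, mask)

# Toy 3x3 "slice" with a bright region in the top-left corner.
slice_ = [[0.9, 0.8, 0.1],
          [0.8, 0.9, 0.2],
          [0.1, 0.2, 0.1]]
print(glia_deep_pipeline(slice_))
```

The point of the sketch is the wiring, not the models: segmentation constrains the classifier's input to the tumor region, which is what lets the second stage focus on methylation-relevant image features.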

2021
Author(s):  
Nikhil J. Dhinagar ◽  
Sophia I. Thomopoulos ◽  
Conor Owens-Walton ◽  
Dimitris Stripelis ◽  
Jose Luis Ambite ◽  
...  

2021
Author(s):  
Ekin Yagis ◽  
Selamawet Workalemahu Atnafu ◽  
Alba García Seco de Herrera ◽  
Chiara Marzi ◽  
Marco Giannelli ◽  
...  

Abstract In recent years, 2D convolutional neural networks (CNNs) have been extensively used for the diagnosis of neurological diseases from magnetic resonance imaging (MRI) data, owing to their ability to discern subtle and intricate patterns. Despite the high performance reported in numerous studies, developing CNN models with good generalization abilities remains challenging due to possible data leakage introduced during cross-validation (CV). In this study, we quantitatively assessed the effect of data leakage caused by splitting 3D MRI data at the 2D slice level, using three 2D CNN models for the classification of patients with Alzheimer's disease (AD) and Parkinson's disease (PD). Our experiments showed that slice-level CV erroneously boosted the average slice-level accuracy on the test set by 30% on the Open Access Series of Imaging Studies (OASIS), 29% on the Alzheimer's Disease Neuroimaging Initiative (ADNI), 48% on the Parkinson's Progression Markers Initiative (PPMI), and 55% on a local de-novo PD Versilia dataset. Further tests on a randomly labeled OASIS-derived dataset produced about 96% (erroneous) accuracy with a slice-level split versus 50% accuracy with a subject-level split, as expected for a randomized experiment. Overall, the effect of an erroneous slice-based CV is severe, especially for small datasets.
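The subject-level split that avoids the leakage described above is easy to implement: assign whole subjects, not individual slices, to the test fold, so no subject contributes slices to both training and testing. A minimal sketch (function and variable names are illustrative, not from the study's code):

```python
import random

def subject_level_split(slices, test_fraction=0.3, seed=0):
    """Split 2D slices into train/test at the SUBJECT level:
    every slice from a given subject lands in the same fold,
    preventing the leakage a naive slice-level split causes."""
    subjects = sorted({subject_id for subject_id, _ in slices})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [s for s in slices if s[0] not in test_subjects]
    test = [s for s in slices if s[0] in test_subjects]
    return train, test

# Each slice is (subject_id, slice_index); 4 subjects x 3 slices each.
slices = [(sid, k) for sid in ("s1", "s2", "s3", "s4") for k in range(3)]
train, test = subject_level_split(slices)
train_subjects = {sid for sid, _ in train}
test_subjects = {sid for sid, _ in test}
# No subject appears in both folds.
assert train_subjects.isdisjoint(test_subjects)
```

In practice the same guarantee is available off the shelf, e.g. scikit-learn's `GroupKFold` with the subject ID as the group label.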


2020
Vol 20 (11)
pp. 817
Author(s):  
Yijun Zhao ◽  
Hui Yuan ◽  
Jingjie Zhou ◽  
Samantha Martin ◽  
Heath Pardoe

2021
Author(s):  
Sara L. Saunders ◽  
Justin M. Clark ◽  
Kyle Rudser ◽  
Anil Chauhan ◽  
Justin R. Ryder ◽  
...  

Abstract
Purpose: To determine which types of magnetic resonance images give the best performance when used to train convolutional neural networks for liver segmentation and volumetry.
Methods: Abdominal MRI scans were performed on 42 adolescents with obesity. Scans included Dixon imaging (yielding water, fat, and T2* images) and low-resolution T2-weighted anatomical scans. Multiple convolutional neural network models using a 3D U-Net architecture were trained with different input images. Whole-liver manual segmentations served as the reference. Segmentation performance was measured using the Dice similarity coefficient (DSC) and the 95% Hausdorff distance; liver volume accuracy was evaluated using bias, precision, and normalized root mean square error (NRMSE).
Results: The models trained using both water and fat images performed best, giving DSC = 0.94 and NRMSE = 4.2%. Models trained without the water image as input all performed worse, including in participants with elevated liver fat. Models using the T2-weighted anatomical images underperformed the Dixon-based models but provided acceptable performance (DSC ≥ 0.92, NRMSE ≤ 6.6%) for use in longitudinal pediatric obesity interventions.
Conclusion: The model using Dixon water and fat images as input gave the best performance, with results comparable to inter-reader variability and state-of-the-art methods.
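The Dice similarity coefficient used as the headline segmentation metric above is simple to compute: DSC = 2|A ∩ B| / (|A| + |B|) for predicted and reference masks A and B. A minimal sketch over flattened binary masks (the empty-mask convention below is an assumption, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two flat binary masks:
    DSC = 2 * |intersection| / (|pred| + |truth|), in [0, 1]."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 6-voxel masks: 2 voxels agree out of 3 foreground voxels each.
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 1]
print(dice_coefficient(pred, truth))  # → 0.666...
```

DSC rewards overlap relative to the combined mask sizes, which is why a value of 0.94, as reported for the water+fat model, indicates near-complete agreement with the manual reference.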


Author(s):  
Ryan Hogan ◽  
Christoforos Christoforou

To inform proper diagnosis and understanding of Alzheimer's disease (AD), deep learning has emerged as an alternative approach for detecting physical brain changes in magnetic resonance imaging (MRI). The advancement of deep learning in biomedical imaging, particularly MRI, has proven an efficient resource for abnormality detection, using convolutional neural networks (CNNs) to perform feature mapping before classification by multilayer perceptrons. In this study, we test the feasibility of using three-dimensional convolutional neural networks to identify, in whole-brain scans, the neurophysiological degeneration that differentiates AD patients from controls. In particular, we propose and train a 3D-CNN model to classify MRI scans of cognitively healthy individuals versus AD patients. We validate the proposed model on a large dataset of more than seven hundred MRI scans (half AD). Our results show a validation accuracy of 79%, on par with the current state of the art. A benefit of the proposed 3D network is that it can assist in the exploration and detection of AD by mapping the complex heterogeneity of the brain, particularly in the limbic system and temporal lobe. The goal of this research is to measure the efficacy and predictability of 3D convolutional networks in detecting the progression of neurodegeneration in MRI brain scans of healthy controls (HC) and AD patients.
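The building block of such a 3D-CNN is the 3D convolution itself, which slides a volumetric kernel through the scan and responds to local spatial patterns in all three dimensions. A minimal pure-Python sketch (valid padding, single channel, no deep-learning framework; in practice this is one call to a library layer such as PyTorch's `Conv3d`):

```python
def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, as in CNNs).
    volume: nested lists [depth][height][width]; kernel likewise."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - kd + 1):
        plane = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                # Dot product of the kernel with the local 3D patch.
                acc = 0.0
                for dz in range(kd):
                    for dy in range(kh):
                        for dx in range(kw):
                            acc += (volume[z + dz][y + dy][x + dx]
                                    * kernel[dz][dy][dx])
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# A 2x2x2 volume of ones convolved with a 2x2x2 kernel of ones
# collapses to a single output voxel summing all eight products.
ones = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
print(conv3d(ones, ones))  # → [[[8.0]]]
```

Stacking many such learned kernels, interleaved with nonlinearities and pooling, is what lets the 3D network capture volumetric atrophy patterns that 2D slice-wise models can miss.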


IEEE Access
2019
Vol 7
pp. 134388-134398
Author(s):  
Zijian Wang ◽  
Yaoru Sun ◽  
Qianzi Shen ◽  
Lei Cao
