A Novel Fully Automated MRI-Based Deep-Learning Method for Classification of IDH Mutation Status in Brain Gliomas

Author(s):  
Chandan Ganesh Bangalore Yogananda ◽  
Bhavya R Shah ◽  
Maryam Vejdani-Jahromi ◽  
Sahil S Nalawade ◽  
Gowtham K Murugesan ◽  
...  

Abstract Background Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. Currently, reliable IDH mutation determination requires invasive surgical procedures. The purpose of this study was to develop a highly accurate, MRI-based, voxel-wise deep-learning IDH-classification network using T2-weighted (T2w) MR images and compare its performance to a multi-contrast network. Methods Multi-parametric brain MRI data and corresponding genomic information were obtained for 214 subjects (94 IDH-mutated, 120 IDH wild-type) from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Two separate networks were developed: a T2w image-only network (T2-net) and a multi-contrast (T2w, FLAIR, and T1 post-contrast) network (TS-net), each performing IDH classification and simultaneous single-label tumor segmentation. The networks were trained using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the networks' performance. ROC analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. Results T2-net demonstrated a mean cross-validation accuracy of 97.14% ±0.04 in predicting IDH mutation status, with a sensitivity of 0.97 ±0.03, specificity of 0.98 ±0.01, and an AUC of 0.98 ±0.01. TS-net achieved a mean cross-validation accuracy of 97.12% ±0.09, with a sensitivity of 0.98 ±0.02, specificity of 0.97 ±0.001, and an AUC of 0.99 ±0.01. The mean whole tumor segmentation Dice scores were 0.85 ±0.009 for T2-net and 0.89 ±0.006 for TS-net. Conclusion We demonstrate high IDH classification accuracy using only T2-weighted MR images. This represents an important milestone towards clinical translation.
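The subject-level three-fold cross-validation described above can be sketched as follows. Only the fold count and the 214-subject cohort size are taken from the abstract; the splitting logic is a generic illustration, not the authors' code:

```python
import numpy as np

def threefold_splits(n_subjects, seed=0):
    """Shuffle subject indices and split them into 3 disjoint folds.
    Splitting at the subject level (rather than the image/slice level)
    keeps all data from one subject inside a single fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_subjects)
    folds = np.array_split(idx, 3)
    splits = []
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        splits.append((train, test))
    return splits

splits = threefold_splits(214)
for train, test in splits:
    # each subject appears in exactly one of train/test per fold
    assert set(map(int, train)) & set(map(int, test)) == set()
```

Per-fold accuracies from the three held-out folds are then averaged to give the mean ± SD figures reported in the Results.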

2020 ◽  
Author(s):  
Chandan Ganesh Bangalore Yogananda ◽  
Bhavya R. Shah ◽  
Frank F. Yu ◽  
Marco C. Pinho ◽  
Sahil S. Nalawade ◽  
...  

ABSTRACT Background One of the most important recent discoveries in brain glioma biology has been the identification of isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status as markers for therapy and prognosis. 1p/19q co-deletion is the defining genomic marker for oligodendrogliomas and confers a better prognosis and treatment response than gliomas without it. Our group has previously developed a highly accurate deep-learning network for determining IDH mutation status using T2-weighted MRI only. The purpose of this study was to develop a similar 1p/19q deep-learning classification network. Methods Multi-parametric brain MRI and corresponding genomic information were obtained for 368 subjects from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). 1p/19q co-deletions were present in 130 subjects; 238 subjects were non-co-deleted. A T2w image-only network (1p/19q-net) was developed to perform 1p/19q co-deletion status classification and simultaneous single-label tumor segmentation using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the network performance. ROC analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. Results 1p/19q-net demonstrated a mean cross-validation accuracy of 93.46% across the 3 folds (93.4%, 94.35%, and 92.62%, SD = 0.8) in predicting 1p/19q co-deletion status, with a sensitivity and specificity of 0.90 ±0.003 and 0.95 ±0.01, respectively, and a mean AUC of 0.95 ±0.01. The whole tumor segmentation mean Dice score was 0.80 ±0.007. Conclusion We demonstrate high 1p/19q co-deletion classification accuracy using only T2-weighted MR images. This represents an important milestone toward using MRI to predict glioma histology, prognosis, and response to treatment.
Keypoints 1. 1p/19q co-deletion status is an important genetic marker for gliomas. 2. We developed a non-invasive, MRI-based, highly accurate deep-learning method for the determination of 1p/19q co-deletion status that utilizes only T2-weighted MR images.
IMPORTANCE OF THE STUDY One of the most important recent discoveries in brain glioma biology has been the identification of isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status as markers for therapy and prognosis. 1p/19q co-deletion is the defining genomic marker for oligodendrogliomas and confers a better prognosis and treatment response than gliomas without it. Currently, the only reliable way to determine 1p/19q co-deletion status requires analysis of glioma tissue obtained either via an invasive brain biopsy or following open surgical resection. The ability to non-invasively determine 1p/19q co-deletion status has significant implications for determining therapy and predicting prognosis. We developed a highly accurate deep-learning network that utilizes only T2-weighted MR images and outperforms previously published image-based methods. The high classification accuracy of our T2w image-only network (1p/19q-net) in predicting 1p/19q co-deletion status marks an important step towards image-based stratification of brain gliomas. Imminent clinical translation is feasible because T2-weighted MR imaging is widely available and routinely performed in the assessment of gliomas.
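The ROC analysis reported above reduces, for a single AUC figure, to the rank-statistic identity AUC = P(score of a positive > score of a negative). A minimal pure-Python sketch; the labels and scores below are made-up toy values, not study data:

```python
def auc_score(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the fraction of (positive, negative) pairs the classifier ranks
    correctly, counting ties as half a win."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]          # 1 = co-deleted, 0 = non-co-deleted (toy)
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # hypothetical network scores
print(auc_score(y, s))  # → 0.8888888888888888 (8 of 9 pairs ranked correctly)
```

This is equivalent to integrating the ROC curve, but avoids explicitly sweeping thresholds.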


2019 ◽  
Author(s):  
Chandan Ganesh Bangalore Yogananda ◽  
Bhavya R. Shah ◽  
Maryam Vejdani-Jahromi ◽  
Sahil S. Nalawade ◽  
Gowtham K. Murugesan ◽  
...  

ABSTRACT Background Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. Currently, reliable IDH mutation determination requires invasive surgical procedures. The purpose of this study was to develop a highly accurate, MRI-based, voxel-wise deep-learning IDH-classification network using T2-weighted (T2w) MR images and compare its performance to a multi-contrast network. Methods Multi-parametric brain MRI data and corresponding genomic information were obtained for 214 subjects (94 IDH-mutated, 120 IDH wild-type) from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Two separate networks were developed: a T2w image-only network (T2-net) and a multi-contrast (T2w, FLAIR, and T1 post-contrast) network (TS-net), each performing IDH classification and simultaneous single-label tumor segmentation. The networks were trained using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the networks' performance. ROC analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. Results T2-net demonstrated a mean cross-validation accuracy of 97.14% ±0.04 in predicting IDH mutation status, with a sensitivity of 0.97 ±0.03, specificity of 0.98 ±0.01, and an AUC of 0.98 ±0.01. TS-net achieved a mean cross-validation accuracy of 97.12% ±0.09, with a sensitivity of 0.98 ±0.02, specificity of 0.97 ±0.001, and an AUC of 0.99 ±0.01. The mean whole tumor segmentation Dice scores were 0.85 ±0.009 for T2-net and 0.89 ±0.006 for TS-net. Conclusion We demonstrate high IDH classification accuracy using only T2-weighted MRI. This represents an important milestone towards clinical translation.
Keypoints 1. IDH status is an important prognostic marker for gliomas. 2. We developed a non-invasive, MRI-based, highly accurate deep-learning method for the determination of IDH status. 3. The deep-learning networks utilize only T2-weighted MR images to predict IDH status, thereby facilitating clinical translation.
IMPORTANCE OF THE STUDY One of the most important recent discoveries in brain glioma biology has been the identification of isocitrate dehydrogenase (IDH) mutation status as a marker for therapy and prognosis. The mutated form of the gene confers a better prognosis and treatment response than gliomas with the non-mutated or wild-type form. Currently, the only reliable way to determine IDH mutation status is to obtain glioma tissue either via an invasive brain biopsy or following open surgical resection. The ability to non-invasively determine IDH mutation status has significant implications for determining therapy and predicting prognosis. We developed a highly accurate deep-learning network that utilizes only T2-weighted MR images and outperforms previously published methods. The high IDH classification accuracy of our T2w image-only network (T2-net) marks an important milestone towards clinical translation. Imminent clinical translation is feasible because T2-weighted MR imaging is widely available and routinely performed in the assessment of gliomas.
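Since the networks above classify voxel-wise, a single subject-level IDH call must be aggregated from the per-voxel predictions. One common aggregation is a majority vote over predicted tumor voxels; the sketch below illustrates that idea as an assumption, not the authors' exact fusion rule:

```python
import numpy as np

def subject_label_from_voxels(pred_map):
    """Collapse a voxel-wise class map (0 = background, 1 = IDH-mutated,
    2 = IDH wild-type) into one subject-level call by majority vote
    over the non-background (tumor) voxels."""
    tumor_voxels = pred_map[pred_map > 0]
    if tumor_voxels.size == 0:
        return 0  # no tumor voxels predicted
    counts = np.bincount(tumor_voxels, minlength=3)
    return int(np.argmax(counts[1:])) + 1  # most frequent tumor class

demo = np.zeros((4, 4, 4), dtype=int)
demo[1:3, 1:3, 1:3] = 1   # 8 voxels called IDH-mutated
demo[0, 0, 0] = 2         # 1 voxel called wild-type
print(subject_label_from_voxels(demo))  # → 1 (majority: IDH-mutated)
```

Voting over a whole tumor volume makes the subject-level call robust to scattered voxel-level errors.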


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Chandan Ganesh Bangalore Yogananda ◽  
Bhavya R Shah ◽  
Frank F Yu ◽  
Marco C Pinho ◽  
Sahil S Nalawade ◽  
...  

Abstract Background One of the most important recent discoveries in brain glioma biology has been the identification of the isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status as markers for therapy and prognosis. 1p/19q co-deletion is the defining genomic marker for oligodendrogliomas and confers a better prognosis and treatment response than gliomas without it. Our group has previously developed a highly accurate deep-learning network for determining IDH mutation status using T2-weighted (T2w) MRI only. The purpose of this study was to develop a similar 1p/19q deep-learning classification network. Methods Multiparametric brain MRI and corresponding genomic information were obtained for 368 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. 1p/19q co-deletions were present in 130 subjects. Two-hundred and thirty-eight subjects were non-co-deleted. A T2w image-only network (1p/19q-net) was developed to perform 1p/19q co-deletion status classification and simultaneous single-label tumor segmentation using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the network performance. Receiver operating characteristic analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. Results 1p/19q-net demonstrated a mean cross-validation accuracy of 93.46% across the 3 folds (93.4%, 94.35%, and 92.62%, SD = 0.8) in predicting 1p/19q co-deletion status with a sensitivity and specificity of 0.90 ± 0.003 and 0.95 ± 0.01, respectively, and a mean area under the curve of 0.95 ± 0.01. The whole tumor segmentation mean Dice score was 0.80 ± 0.007. Conclusion We demonstrate high 1p/19q co-deletion classification accuracy using only T2w MR images. This represents an important milestone toward using MRI to predict glioma histology, prognosis, and response to treatment.
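The Dice score used above to measure segmentation overlap is 2|A∩B| / (|A| + |B|); a minimal NumPy sketch on toy binary masks:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|).  eps guards against
    division by zero when both masks are empty."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16-pixel square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # shifted 16-pixel square
print(round(dice_score(a, b), 4))  # overlap is 3x3 = 9 → 18/32 = 0.5625
```

A Dice score of 1.0 means perfect overlap with the manual segmentation; 0 means none.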


2020 ◽  
Author(s):  
Chandan Ganesh Bangalore Yogananda ◽  
Bhavya R. Shah ◽  
Sahil S. Nalawade ◽  
Gowtham K. Murugesan ◽  
Frank F. Yu ◽  
...  

ABSTRACT PURPOSE Methylation of the O6-methylguanine-DNA methyltransferase (MGMT) promoter results in epigenetic silencing of the MGMT enzyme and confers an improved prognosis and treatment response in gliomas. The purpose of this study was to develop a deep-learning network for determining the methylation status of the MGMT promoter in gliomas using T2-weighted (T2w) magnetic resonance images only. METHODS Brain MRI and corresponding genomic information were obtained for 247 subjects from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA); 163 subjects had a methylated MGMT promoter. A T2w image-only network (MGMT-net) was developed to determine MGMT promoter methylation status and perform simultaneous single-label tumor segmentation. The network was trained using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the network's performance. Dice scores were computed to determine tumor segmentation accuracy. RESULTS MGMT-net demonstrated a mean cross-validation accuracy of 94.73% across the 3 folds (95.12%, 93.98%, and 95.12%, SD = 0.66) in predicting MGMT methylation status, with a sensitivity and specificity of 96.31% ±0.04 and 91.66% ±2.06, respectively, and a mean AUC of 0.93 ±0.01. The whole tumor segmentation mean Dice score was 0.82 ±0.008. CONCLUSION We demonstrate high classification accuracy in predicting the methylation status of the MGMT promoter using only T2w MR images, surpassing the sensitivity, specificity, and accuracy of invasive histological methods such as pyrosequencing, methylation-specific PCR, and immunofluorescence. This represents an important milestone toward using MRI to predict glioma histology, prognosis, and response to treatment.
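The sensitivity and specificity figures reported above come straight from confusion-matrix counts. A small self-contained sketch with made-up labels (1 = methylated, 0 = unmethylated in this toy example):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)
    for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical ground-truth labels
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]  # hypothetical network predictions
print(sens_spec(y_true, y_pred))  # → (0.75, 0.75)
```

Reporting both metrics matters when classes are imbalanced (here 163 of 247 subjects are methylated), since accuracy alone can hide poor performance on the minority class.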


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii167-ii167
Author(s):  
Sied Kebir ◽  
Tobias Blau ◽  
Lazaros Lazaridis ◽  
Teresa Schmidt ◽  
Kathy Keyvani ◽  
...  

Abstract BACKGROUND The determination of isocitrate dehydrogenase (IDH) mutation status plays a crucial role in the diagnosis of glioblastoma. Depending on the patient's age and the result of immunohistochemical analysis, additional DNA sequencing may be required to determine IDH mutation status. Because DNA sequencing results can take several days to become available, there is a need for inexpensive, fast, non-invasive methods. In this work, we investigated whether IDH mutation detection by artificial intelligence (deep learning) from digitized hematoxylin-eosin (H&E) stained sectional specimens is feasible. METHODS Patients with histologically confirmed glioblastoma from The Cancer Genome Atlas cohort were included if digitized H&E-stained whole-slide scans with corresponding information on IDH status were publicly available. The total cohort was subdivided into training, validation, and test cohorts in a ratio of 44:33:23. Whole-slide scans were partitioned into tiles of fixed size and used to train a ResNet-34 convolutional neural network. The trained model was evaluated once on the test cohort using receiver operating characteristic (ROC) analysis and the area-under-the-curve (AUC) metric. To ascertain which regions of the H&E specimens were decisive for the determination of IDH status, the Grad-CAM method was used. RESULTS 124 patients were included, 29 of whom were IDH-mutant. The digitized H&E slides had an average size of 2.5 gigabytes per image file, and approximately 1000 tiles were prepared per slide. The prediction AUC of the trained model was 0.94. IDH prediction took about 3.5 seconds per slide. The Grad-CAM evaluation confirmed that the model mainly used cellular regions to collect decision-supporting information. CONCLUSIONS This pilot study shows the promising potential of deep learning for the prediction of IDH mutation status from digitized H&E scans in glioblastoma. To confirm these data, the model needs to be tested on an independent cohort.
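The tile-partitioning step described above (fixed-size tiles cut from a whole-slide scan before CNN training) can be sketched as follows. The 256-pixel tile size and array shapes are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def tile_slide(slide, tile_size=256):
    """Partition a whole-slide image array (H, W, C) into
    non-overlapping square tiles, discarding incomplete edge tiles.
    In practice, mostly-background tiles would also be filtered out."""
    h, w = slide.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(slide[y:y + tile_size, x:x + tile_size])
    return tiles

slide = np.zeros((1000, 600, 3), dtype=np.uint8)  # stand-in for a WSI region
tiles = tile_slide(slide)
print(len(tiles), tiles[0].shape)  # → 6 (256, 256, 3): 3 rows x 2 cols
```

Tiling is what makes gigabyte-scale slides tractable: the CNN sees small fixed-size inputs, and the slide-level prediction is aggregated from the per-tile outputs.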


Author(s):  
Yoon Seong Choi ◽  
Sohi Bae ◽  
Jong Hee Chang ◽  
Seok-Gu Kang ◽  
Se Hoon Kim ◽  
...  

Abstract Background Glioma prognosis depends on isocitrate dehydrogenase (IDH) mutation status. We aimed to predict the IDH status of gliomas from preoperative MR images using a fully automated hybrid approach with convolutional neural networks (CNNs) and radiomics. Methods We reviewed 1166 preoperative MR images of gliomas (grades II–IV) from Severance Hospital (n = 856), Seoul National University Hospital (SNUH; n = 107), and The Cancer Imaging Archive (TCIA; n = 203). The Severance set was subdivided into the development (n = 727) and internal test (n = 129) sets. Based on T1 postcontrast, T2, and fluid-attenuated inversion recovery images, a fully automated model was developed that comprised a CNN for tumor segmentation (Model 1) and a CNN-based classifier for IDH status prediction (Model 2) that uses a hybrid approach based on 2D tumor images and radiomic features from 3D tumor shape and loci guided by Model 1. The trained model was tested on internal (a subset of the Severance set) and external (SNUH and TCIA) test sets. Results The CNN for tumor segmentation (Model 1) achieved a Dice coefficient of 0.86–0.92 across datasets. Our hybrid model achieved accuracies of 93.8%, 87.9%, and 78.8%, with areas under the receiver operating characteristic curves of 0.96, 0.94, and 0.86 and areas under the precision-recall curves of 0.88, 0.82, and 0.81 in the internal test, SNUH, and TCIA sets, respectively. Conclusions Our fully automated hybrid model demonstrated the potential to be a highly reproducible and generalizable tool across different datasets for the noninvasive prediction of the IDH status of gliomas.
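The hybrid step above amounts to concatenating a learned CNN embedding with hand-crafted radiomic features before classification. A minimal sketch; the feature dimensions are invented for illustration, since the abstract does not give the model's actual feature sizes:

```python
import numpy as np

def hybrid_features(cnn_feats, radiomic_feats):
    """Concatenate a CNN embedding with hand-crafted radiomic features
    (e.g. 3D tumor shape and location descriptors) into one input
    vector for the downstream IDH classifier."""
    return np.concatenate([cnn_feats, radiomic_feats])

cnn = np.random.default_rng(0).normal(size=128)  # hypothetical CNN embedding
rad = np.array([0.42, 1.7, 0.9])                 # hypothetical shape/loci features
x = hybrid_features(cnn, rad)
print(x.shape)  # → (131,)
```

The design rationale is complementarity: the CNN captures texture from 2D tumor images, while radiomic descriptors encode global 3D shape and location that a 2D network cannot see.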


2019 ◽  
Author(s):  
Sahil Nalawade ◽  
Gowtham Murugesan ◽  
Maryam Vejdani-Jahromi ◽  
Ryan A. Fisicaro ◽  
Chandan Ganesh Bangalore Yogananda ◽  
...  

Abstract Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose a novel automated pipeline for predicting IDH status noninvasively using deep learning and T2-weighted (T2w) MR images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated 2D densely connected model was trained to classify IDH mutation status on 208 subjects and tested on a held-out set of 52 subjects, using 5-fold cross-validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. A mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH-mutated, and IDH wild-type. A test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep-learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep-learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
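The leakage-avoidance step above, splitting at the subject level before any slice-wise randomization so that no subject's slices land in both train and test, can be sketched as follows (a generic illustration, not the authors' code):

```python
import random

def subject_wise_split(slices_by_subject, test_fraction=0.2, seed=0):
    """Split at the SUBJECT level before flattening to slices.
    Randomizing slices directly would scatter one subject's nearly
    identical neighboring slices across train and test (data leakage)."""
    subjects = sorted(slices_by_subject)
    random.Random(seed).shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subj = set(subjects[:n_test])
    train = [s for subj in subjects[n_test:] for s in slices_by_subject[subj]]
    test = [s for subj in test_subj for s in slices_by_subject[subj]]
    return train, test, test_subj

# toy dataset: 10 subjects, 5 axial slices each
data = {f"subj{i}": [f"subj{i}_slice{k}" for k in range(5)] for i in range(10)}
train, test, test_subj = subject_wise_split(data)
# no subject contributes slices to both sets
assert not {s.split("_")[0] for s in train} & {s.split("_")[0] for s in test}
```

This is the safeguard the abstract's final sentence calls for: slice-level shuffling without subject separation inflates reported accuracy, because near-duplicate slices of one tumor end up on both sides of the split.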


2021 ◽  
Author(s):  
Shidong Li ◽  
Jianwei Liu ◽  
Zhanjie Song

Abstract Since magnetic resonance imaging (MRI) has superior soft-tissue contrast, accurately contouring (brain) tumors from MRI images is essential in medical image processing. Accurate tumor segmentation is immensely challenging, since tumor and normal tissues are often inextricably intertwined in the brain, and manual contouring is extremely time-consuming. Recent deep-learning techniques have begun to show reasonable success in automatic brain tumor segmentation. The purpose of this study is to develop a new region-of-interest-aided (ROI-aided) deep-learning technique for automatic brain tumor MRI segmentation. The method consists of two major steps. Step one uses a 2D network with U-Net architecture to localize the tumor ROI, which reduces interference from normal tissue. Step two applies a 3D U-Net for tumor segmentation within the identified ROI. The proposed method is validated on the MICCAI BraTS 2015 Challenge data with 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) patients. The Dice similarity coefficient and the Hausdorff distance between the manual tumor contour and that segmented by the proposed method are 0.876 ±0.068 and 3.594 ±1.347 mm, respectively. These results indicate that the proposed method is an effective ROI-aided deep-learning strategy for brain MRI tumor segmentation, and a valid and useful tool in medical image processing.
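The hand-off between the two stages above, cropping the volume to the stage-one ROI before running the 3D U-Net, can be sketched as below. The margin, array shapes, and mask are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def roi_bounding_box(mask, margin=2):
    """Bounding box of the coarse ROI mask produced by stage one,
    padded by a small margin and clipped to the volume bounds."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

volume = np.random.default_rng(0).normal(size=(64, 64, 64))
coarse_mask = np.zeros((64, 64, 64), dtype=bool)
coarse_mask[20:30, 25:35, 30:40] = True        # stand-in stage-one ROI
crop = volume[roi_bounding_box(coarse_mask)]   # input to the 3D U-Net
print(crop.shape)  # → (14, 14, 14)
```

Cropping first means the 3D network never sees most of the normal tissue, which is exactly the "reduce interference" rationale of the ROI-aided design.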


Brain tumors result from abnormal growth and uncontrolled cell division in the brain, and can lead to loss of life if not detected early and accurately. Many medical imaging applications still lack reliable segmentation and labeling. Recently, deep learning has come to play an important role in digital health, one effect of which is reducing the amount of manual judgment in disease diagnosis. Brain tumor diagnosis in particular demands high accuracy, since small errors in judgment can be fatal; brain tumor segmentation is therefore a necessary challenge in medicine. Numerous segmentation methods exist, but many lack accuracy. In this work, deep learning is used for brain tumor segmentation: three networks are employed for brain MR image segmentation and compared with a single-network approach. The proposed approach improves segmentation and yields results with high accuracy and efficiency.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for straightforward setup of medical image segmentation pipelines. Already-implemented pipelines are commonly standalone software, optimized for a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep-learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g., cross-validation). Likewise, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline with just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
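One of the pipeline stages listed above is patch-wise analysis: sliding an overlapping 3D window over a scan before feeding a 3D U-Net. The sketch below is a generic illustration of that stage only, not MIScnn's actual API; patch and stride sizes are arbitrary choices for the example:

```python
import numpy as np

def extract_patches_3d(volume, patch=(32, 32, 32), stride=(16, 16, 16)):
    """Slide a 3D window across a volume with overlap (stride < patch
    size), producing fixed-size sub-volumes a 3D network can consume."""
    starts = [range(0, volume.shape[d] - patch[d] + 1, stride[d])
              for d in range(3)]
    return [volume[z:z + patch[0], y:y + patch[1], x:x + patch[2]]
            for z in starts[0] for y in starts[1] for x in starts[2]]

vol = np.zeros((64, 64, 64), dtype=np.float32)  # stand-in for a CT scan
patches = extract_patches_3d(vol)
print(len(patches), patches[0].shape)  # → 27 (32, 32, 32): 3 starts per axis
```

Overlapping patches let full-resolution 3D scans fit in GPU memory; at prediction time the per-patch outputs are stitched (and averaged in the overlaps) back into a whole-volume segmentation.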

