Classification of Brain Tumor IDH Status using MRI and Deep Learning

2019
Author(s):
Sahil Nalawade
Gowtham Murugesan
Maryam Vejdani-Jahromi
Ryan A. Fisicaro
Chandan Ganesh Bangalore Yogananda
...

Abstract: Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose a novel automated pipeline for predicting IDH status noninvasively using deep learning and T2-weighted (T2w) MR images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated 2D densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects, using 5-fold cross-validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. A mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH-mutated, and IDH wild-type. A test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the held-out dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
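The subject-separation precaution described above (keeping every slice of a subject on the same side of the train/test split before any slice-wise shuffling) can be sketched as follows. This is a minimal illustration, not the study's pipeline; the subject IDs are made up.

```python
import random

def subject_wise_split(subject_ids, test_fraction=0.2, seed=42):
    """Split at the subject level so that no subject contributes
    slices to both training and test sets (avoids data leakage)."""
    rng = random.Random(seed)
    ids = sorted(subject_ids)
    rng.shuffle(ids)
    n_test = int(len(ids) * test_fraction)
    return set(ids[n_test:]), set(ids[:n_test])  # (train, test)

# Illustrative: 260 subjects split 80/20, matching the study's 208/52 counts
subjects = [f"TCIA-{i:03d}" for i in range(260)]
train, test = subject_wise_split(subjects)
assert train.isdisjoint(test)  # no subject appears in both sets
print(len(train), len(test))   # 208 52
```

Slice-wise randomization can then be applied safely within each subject set, since the subject boundary is already fixed.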

2019
Author(s):
Chandan Ganesh Bangalore Yogananda
Bhavya R. Shah
Maryam Vejdani-Jahromi
Sahil S. Nalawade
Gowtham K. Murugesan
...

Abstract
Background: Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. Currently, reliable IDH mutation determination requires invasive surgical procedures. The purpose of this study was to develop a highly accurate, MRI-based, voxel-wise deep-learning IDH-classification network using T2-weighted (T2w) MR images and compare its performance to a multi-contrast network.
Methods: Multi-parametric brain MRI data and corresponding genomic information were obtained for 214 subjects (94 IDH-mutated, 120 IDH wild-type) from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Two separate networks were developed: a T2w-image-only network (T2-net) and a multi-contrast (T2w, FLAIR, and T1 post-contrast) network (TS-net), each performing IDH classification and simultaneous single-label tumor segmentation. The networks were trained using 3D Dense-UNets. Three-fold cross-validation was performed to generalize the networks' performance, and ROC analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy.
Results: T2-net demonstrated a mean cross-validation accuracy of 97.14% +/- 0.04 in predicting IDH mutation status, with a sensitivity of 0.97 +/- 0.03, a specificity of 0.98 +/- 0.01, and an AUC of 0.98 +/- 0.01. TS-net achieved a mean cross-validation accuracy of 97.12% +/- 0.09, with a sensitivity of 0.98 +/- 0.02, a specificity of 0.97 +/- 0.001, and an AUC of 0.99 +/- 0.01. The mean whole-tumor segmentation Dice scores were 0.85 +/- 0.009 for T2-net and 0.89 +/- 0.006 for TS-net.
Conclusion: We demonstrate high IDH classification accuracy using only T2-weighted MRI. This represents an important milestone towards clinical translation.
Keypoints: 1. IDH status is an important prognostic marker for gliomas. 2. We developed a non-invasive, MRI-based, highly accurate deep-learning method for the determination of IDH status. 3. The deep-learning network utilizes only T2-weighted MR images to predict IDH status, thereby facilitating clinical translation.
Importance of the study: One of the most important recent discoveries in brain glioma biology has been the identification of isocitrate dehydrogenase (IDH) mutation status as a marker for therapy and prognosis. The mutated form of the gene confers a better prognosis and treatment response than the non-mutated or wild-type form. Currently, the only reliable way to determine IDH mutation status is to obtain glioma tissue, either via an invasive brain biopsy or following open surgical resection. The ability to non-invasively determine IDH mutation status has significant implications for determining therapy and predicting prognosis. We developed a highly accurate deep-learning network that utilizes only T2-weighted MR images and outperforms previously published methods. The high IDH classification accuracy of our T2w-image-only network (T2-net) marks an important milestone towards clinical translation. Imminent clinical translation is feasible because T2-weighted MR imaging is widely available and routinely performed in the assessment of gliomas.
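The whole-tumor Dice scores reported above measure overlap between a predicted and a reference segmentation mask. A minimal sketch of the standard formula, 2|A ∩ B| / (|A| + |B|), on toy binary masks (not study data):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |truth|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0  # both empty: perfect

# Toy example: 3 of the 4 foreground voxels in each mask agree
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 1, 1, 0, 0]
print(round(dice_score(pred, truth), 3))  # 0.75
```

In practice the masks are 3D volumes flattened to vectors; the arithmetic is identical.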


2020
Vol 10 (7)
pp. 463
Author(s):
Muhaddisa Barat Ali
Irene Yu-Hua Gu
Mitchel S. Berger
Johan Pallud
Derek Southwell
...

Brain tumors such as low-grade gliomas (LGG) are classified molecularly, which requires the surgical collection of tissue samples. Pre-surgical, non-operative identification of LGG molecular type could improve patient counseling and treatment decisions. However, radiographic approaches to LGG molecular classification are currently lacking, as clinicians are unable to reliably predict LGG molecular type from magnetic resonance imaging (MRI) studies. Machine learning approaches may improve the prediction of LGG molecular classification from MRI; however, the development of these techniques requires large annotated datasets. Merging clinical data from different hospitals to increase case numbers is needed, but the use of different scanners and settings can affect the results, and simply combining them into a large dataset often has a significant negative impact on performance. This calls for efficient domain adaptation methods. Despite some previous studies on domain adaptation, mapping MR images from different datasets to a common domain without affecting subtle molecular-biomarker information has not yet been reported. In this paper, we propose an effective domain adaptation method based on the Cycle Generative Adversarial Network (CycleGAN). The dataset is further enlarged by augmenting more MRIs using another GAN approach. Further, because brain tumor segmentation requires time and anatomical expertise to draw an exact boundary around the tumor, we instead use a tight bounding box as a strategy. Finally, an efficient deep feature learning method, a multi-stream convolutional autoencoder (CAE) with feature fusion, is proposed for the prediction of molecular subtypes (1p/19q codeletion and IDH mutation). The experiments were conducted on a total of 161 patients, consisting of FLAIR and contrast-enhanced T1-weighted (T1ce) MRIs from two different institutions in the USA and France.
The proposed scheme is shown to achieve a test accuracy of 74.81% on 1p/19q codeletion and 81.19% on IDH mutation, a marked improvement over the results obtained without domain mapping. The approach also shows performance comparable to several state-of-the-art methods.


Information
2021
Vol 12 (6)
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, several image processing methods have been developed to determine the cell cycle stage of individual cells. In most of these methods, however, cells have to be segmented and their features extracted, and during feature extraction some important information may be lost, resulting in lower classification accuracy. We therefore used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. For image classification we used a residual network (ResNet), one of the most widely used deep learning classification networks. With our method, cell cycle images were classified more effectively, reaching an accuracy of 83.88%, an increase of 4.48% over the 79.40% accuracy of previous experiments. On another dataset used to verify the model, our accuracy increased by 12.52% over previous results. The results show that our cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images, and that our method could mitigate the low classification accuracy in biomedical images caused by insufficient numbers and imbalanced distributions of original images.
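Before a GAN such as WGAN-GP generates synthetic images, one must decide how many samples each minority class needs to balance the dataset. A minimal sketch of that bookkeeping step; the class names and counts below are illustrative, not the study's:

```python
from collections import Counter

def augmentation_targets(labels):
    """Number of synthetic samples to generate per class so that
    every class matches the size of the largest class."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Illustrative imbalanced cell-cycle dataset (counts are made up)
labels = ["G1"] * 500 + ["S"] * 120 + ["G2"] * 80 + ["M"] * 30
print(augmentation_targets(labels))
# {'G1': 0, 'S': 380, 'G2': 420, 'M': 470}
```

The generator is then asked for exactly these per-class quantities, so downstream training sees a uniform class distribution.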


2022
Vol 10 (1)
pp. 0-0

Effective productivity estimates of fresh produce are essential for efficient farming, commercial planning, and logistical support. In the past ten years, machine learning (ML) algorithms have been widely used for grading and classification of agricultural products in the agriculture sector. However, precise and accurate assessment of tomato maturity using ML algorithms remains quite challenging, because these algorithms rely on hand-crafted features. In this paper we therefore propose a deep-learning-based tomato maturity grading system that increases the accuracy and adaptability of maturity grading with less training data. The performance of the proposed system is assessed on real tomato datasets collected from open fields using a Nikon D3500 CCD camera. The proposed approach achieved an average maturity classification accuracy of 99.8%, which is quite promising in comparison to other state-of-the-art methods.


Author(s):  
Carlos Eduardo Correia
Yoshie Umemura
Jessica R Flynn
Anne S Reiner
Edward K Avila

Abstract. Purpose: Many low-grade gliomas (LGG) harbor isocitrate dehydrogenase (IDH) mutations. Although IDH mutation is known to be epileptogenic, the rate of refractory seizures in LGG with IDH mutation vs wild-type had not been previously compared. We therefore compared seizure pharmacoresistance in IDH-mutated and wild-type LGGs. Methods: Single-institution retrospective study of patients with histologically proven LGG, known IDH mutation status, seizures, and ≥2 neurology clinic encounters. Seizure history was followed until histological high-grade transformation or death. Seizures requiring ≥2 changes in anti-epileptic drugs were considered pharmacoresistant. Incidence rates of pharmacoresistant seizures were estimated using competing-risks methodology. Results: Of 135 patients, 25 (19%) had LGGs classified as IDH wild-type. Of those with IDH mutation, 104 (94.5%) were IDH1 R132H; only six were IDH2 R172K. 120 patients (89%) had tumor resection and 14 (10%) had biopsy. Initial post-surgical management included observation (64%), concurrent chemoradiation (23%), chemotherapy alone (9%), and radiotherapy alone (4%). Seizures became pharmacoresistant in 24 IDH-mutated patients (22%) and in 3 IDH wild-type patients (12%). The 4-year cumulative incidence of intractable seizures was 17.6% (95% CI: 10.6%-25.9%) in IDH-mutated and 11% (95% CI: 1.3%-32.6%) in IDH wild-type LGG (Gray's P-value = 0.26). Conclusions: 22% of IDH-mutated patients developed pharmacoresistant seizures, compared to 12% of those with IDH wild-type tumors. The likelihood of developing pharmacoresistant seizures in patients with LGG-related epilepsy was statistically independent of IDH mutation status; however, IDH-mutated tumors were approximately twice as likely to be associated with LGG-related pharmacoresistant seizures.


2019
Vol 11 (01n02)
pp. 1950002
Author(s):
Rasim M. Alguliyev
Ramiz M. Aliguliyev
Fargana J. Abdullayeva

Recently, data collected from social media have made it possible to analyze social events and make predictions about real events based on the sentiments and opinions of users. Most cyber-attacks are carried out by hackers on the basis of discussions on social media. This paper proposes a method that predicts the occurrence of DDoS attacks by finding relevant texts in social media. To perform high-precision classification of texts into positive and negative classes, a CNN model with 13 layers and an improved LSTM method are used. The negative and positive sentiments in social networking texts are then used to predict the occurrence of DDoS attacks on the following day. To evaluate the efficiency of the proposed method, experiments were conducted on Twitter data. The proposed method achieved a recall, precision, F-measure, training loss, training accuracy, testing loss, and test accuracy of 0.85, 0.89, 0.87, 0.09, 0.78, 0.13, and 0.77, respectively.
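The reported F-measure is consistent with the reported precision and recall via the usual harmonic-mean formula; a quick check:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall,
    2 * P * R / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Values reported above: precision 0.89, recall 0.85
print(round(f_measure(0.89, 0.85), 2))  # 0.87
```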


2019
Vol 9 (21)
pp. 4500
Author(s):
Phung
Rhee

Research on clouds has an enormous influence on the sky sciences and related applications, and cloud classification plays an essential role in it. Much research has been conducted using both traditional machine learning and deep learning approaches, with deep learning generally achieving better results. However, most deep learning models need large amounts of training data because of their large number of parameters, and therefore cannot reach high accuracy on small datasets. In this paper, we propose a complete solution for highly accurate classification of cloud image patches on small datasets. First, we design a convolutional neural network (CNN) model suited to small datasets. Second, we apply regularization techniques to increase generalization and avoid overfitting of the model. Finally, we introduce a model-average ensemble to reduce the variance of prediction and increase classification accuracy. We evaluate the proposed solution on the Singapore whole-sky imaging categories (SWIMCAT) dataset, where it demonstrates perfect classification accuracy for most classes, confirming the robustness of the proposed model.
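A model-average ensemble of the kind described above reduces prediction variance by averaging per-class probability vectors across models before taking the argmax. A minimal sketch with made-up probabilities (not SWIMCAT outputs):

```python
def ensemble_predict(prob_lists):
    """Average class-probability vectors from several models and
    return the index of the most probable class."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three models disagree on one cloud-patch image; the average settles it
model_probs = [
    [0.60, 0.30, 0.10],  # model 1 favors class 0
    [0.20, 0.70, 0.10],  # model 2 favors class 1
    [0.25, 0.45, 0.30],  # model 3 weakly favors class 1
]
print(ensemble_predict(model_probs))  # 1
```

Averaging probabilities (soft voting) rather than hard labels lets a confident model outweigh two weakly-wrong ones, which is what reduces variance on small datasets.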


Author(s):  
Tong Lin
Xin Chen
Xiao Tang
Ling He
...  

This paper discusses the use of deep convolutional neural networks for radar target classification. Three parts of the work are carried out. First, effective data augmentation methods are used to enlarge the dataset and address class imbalance. Second, using deep learning techniques, we explore an effective framework for classifying and identifying targets from radar spectral map data; with the augmentation and this framework, we achieve an overall classification accuracy of 0.946. Finally, we investigate the automatic annotation of image ROIs (regions of interest); by adjusting the model, we obtain 93% accuracy in the automatic labeling and classification of targets for both the car and cyclist categories.


2021
Vol 924 (1)
pp. 012022
Author(s):
Y Hendrawan
B Rohmatulloh
I Prakoso
V Liana
M R Fauzy
...  

Abstract: Tempe is a traditional food originating from Indonesia, made by fermenting soybeans with Rhizopus mold. The purpose of this study was to classify three quality levels of soybean tempe, i.e., fresh, consumable, and non-consumable, using convolutional neural network (CNN) based deep learning. Four types of pre-trained CNNs were used in this study: SqueezeNet, GoogLeNet, ResNet50, and AlexNet. The sensitivity analysis showed that the highest quality-classification accuracy for soybean tempe, 100%, can be achieved with AlexNet (SGDm optimizer, learning rate 0.0001); GoogLeNet (Adam optimizer, learning rate 0.0001); GoogLeNet (RMSProp optimizer, learning rate 0.0001); ResNet50 (Adam optimizer, learning rate 0.00005); ResNet50 (Adam optimizer, learning rate 0.0001); and SqueezeNet (RMSProp optimizer, learning rate 0.0001). In further testing on the testing-set data, the classification accuracy based on the confusion matrix reached 98.33%. The combination of the CNN model and a low-cost commercial digital camera can later be used to detect the quality of soybean tempe, with the advantages of being non-destructive, rapid, accurate, low-cost, and real-time.
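Accuracy from a confusion matrix, as reported above, is the sum of the diagonal (correct predictions) divided by the total count. A minimal sketch; the 3x3 matrix for the fresh/consumable/non-consumable classes is illustrative, not the study's actual counts:

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy = sum of diagonal entries / sum of all entries."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Rows: true class; columns: predicted class
# (fresh, consumable, non-consumable) -- illustrative counts
cm = [
    [20,  0,  0],
    [ 1, 19,  0],
    [ 0,  0, 20],
]
print(round(accuracy_from_confusion(cm), 4))  # 0.9833
```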


2021
Author(s):
Lam Pham
Alexander Schindler
Mina Schutz
Jasmin Lampert
Sven Schlarb
...  

In this paper, we present deep learning frameworks for audio-visual scene classification (SC) and indicate how individual visual and audio features, as well as their combination, affect SC performance. Our extensive experiments, conducted on the DCASE (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) Task 1B development dataset, achieve best classification accuracies of 82.2%, 91.1%, and 93.9% with audio input only, visual input only, and combined audio-visual input, respectively. The highest classification accuracy of 93.9%, obtained from an ensemble of audio-based and visual-based frameworks, represents an improvement of 16.5% over the DCASE baseline.

