Whole Tumor Segmentation from Brain MR images using Multi-view 2D Convolutional Neural Network

Author(s):  
Ritu Lahoti ◽  
Sunil Kumar Vengalil ◽  
Punith B Venkategowda ◽  
Neelam Sinha ◽  
Vinod Veera Reddy
2021 ◽  
Vol 68 (2) ◽  
pp. 2413-2429
Author(s):  
Tapan Kumar Das ◽  
Pradeep Kumar Roy ◽  
Mohy Uddin ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang ◽  
...  

We consider the problem of fully automatic brain tumor segmentation in MR images containing glioblastomas. We propose a three-dimensional convolutional neural network (3D MedImg-CNN) approach that achieves high performance while being extremely efficient, a balance that existing methods have struggled to achieve. Our 3D MedImg-CNN is trained directly on the raw image modalities and thus learns a characteristic representation directly from the data. We propose a new cascaded architecture with two pathways that each model normal details in tumors. Fully exploiting the convolutional nature of our model also allows us to segment a complete cerebral image in one minute. The performance of the proposed 3D MedImg-CNN segmentation method is computed using the Dice similarity coefficient (DSC). In experiments on the 2013, 2015, and 2017 BraTS challenge datasets, we show that our approach is among the top-performing methods in the literature, while also being very efficient.
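
Since segmentation quality here is reported with the Dice similarity coefficient, the following is a minimal sketch of how DSC can be computed for binary whole-tumor masks; the NumPy implementation and function name are illustrative, not taken from the paper.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 |P ∩ T| / (|P| + |T|), where P and T are the predicted and
    ground-truth tumor voxels.  Both inputs are boolean (or 0/1) arrays
    of identical shape, e.g. a 3D MR volume.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example with two toy 3D masks.
p = np.zeros((4, 4, 4), dtype=bool); p[1:3, 1:3, 1:3] = True
t = np.zeros((4, 4, 4), dtype=bool); t[1:3, 1:3, :] = True
print(f"DSC = {dice_similarity(p, t):.3f}")
```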


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Sahar Gull ◽  
Shahzad Akbar ◽  
Habib Ullah Khan

Brain tumor is a fatal disease caused by the growth of abnormal cells in the brain tissues. Early and accurate detection of this disease can therefore save a patient's life. This paper proposes a novel framework for the detection of brain tumors using magnetic resonance (MR) images. The framework is based on a fully convolutional neural network (FCNN) and transfer learning techniques. The proposed framework has five stages: preprocessing, skull stripping, CNN-based tumor segmentation, postprocessing, and transfer-learning-based binary brain tumor classification. In preprocessing, the MR images are filtered to eliminate noise and to improve contrast. For segmentation of brain tumor images, the proposed CNN architecture is used, and for postprocessing, a global threshold technique is utilized to eliminate small non-tumor regions, which enhances the segmentation results. For classification, the GoogleNet model is employed on three publicly available datasets. The experimental results show that the proposed method achieved average accuracies of 96.50%, 97.50%, and 98% for segmentation and 96.49%, 97.31%, and 98.79% for classification of brain tumors on the BRATS2018, BRATS2019, and BRATS2020 datasets, respectively. The outcomes demonstrate that the proposed framework is effective and efficient, attaining higher performance on the BRATS2020 dataset than on the other two. According to the experimental results, the proposed framework outperforms other recent studies in the literature. In addition, this research will support doctors and clinicians in the automatic diagnosis of brain tumor disease.
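
The postprocessing step described above (a global threshold that removes small non-tumor regions) can be sketched as follows; the threshold value, minimum region size, and use of scipy.ndimage are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy import ndimage

def postprocess_probability_map(prob_map: np.ndarray,
                                threshold: float = 0.5,
                                min_region_voxels: int = 100) -> np.ndarray:
    """Binarize a CNN tumor-probability map with a global threshold and
    drop small connected components that are unlikely to be tumor."""
    # Global thresholding of the per-voxel tumor probability.
    mask = prob_map >= threshold

    # Label connected regions and measure their sizes.
    labels, n_regions = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_regions + 1))

    # Keep only regions with at least `min_region_voxels` voxels.
    keep = np.zeros_like(mask)
    for region_id, size in enumerate(sizes, start=1):
        if size >= min_region_voxels:
            keep |= labels == region_id
    return keep
```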


2020 ◽  
Vol 15 (2) ◽  
pp. 94-108
Author(s):  
R. Kala ◽  
P. Deepa

Background: Accurate detection of a brain tumor and its severity is a challenging task in the medical field, so there is a need to develop brain tumor detection algorithms, which are increasingly important for diagnosis, treatment planning, and outcome evaluation. Materials and Methods: A brain tumor segmentation method using deep learning classification and multi-modal composition has been developed using deep convolutional neural networks. The different MRI modalities, T1, FLAIR, T1C, and T2, are given as input to the proposed method. The MR images from the different modalities are used in proportion to the information content of each modality. The weights for the different modalities are calculated blockwise, with the standard deviation of a block taken as a proxy for its information content. Each modality image is then convolved with its corresponding weight, and the weighted modalities are summed to obtain a new composite image, which is given as input to the deep convolutional neural network. The deep convolutional neural network performs segmentation through its layers, with different filter operations applied in each layer to obtain enhanced classification and spatially consistent segmentation results. The analysis of the proposed method shows that the discriminatory information from the different modalities is effectively combined to increase the overall accuracy of segmentation. Results: The proposed deep convolutional neural network for brain tumor segmentation has been analysed using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013). The complete, core, and enhancing regions are validated with the Dice similarity coefficient and the Jaccard similarity index for the Challenge, Leaderboard, and Synthetic datasets. To evaluate the classification rates, metrics such as accuracy, precision, sensitivity, specificity, under-segmentation, incorrect segmentation, and over-segmentation are also evaluated and compared with existing methods. Experimental results exhibit a higher degree of precision in the segmentation compared to existing methods. Conclusion: In this work, a deep convolutional neural network with different MR image modalities is used to detect brain tumors. A new input image is created by convolving the input images of the different modalities with their weights, where the weights are determined from the standard deviation of each block. Segmentation accuracy is high, with efficient appearance and spatial consistency. The segmented images are evaluated using well-established metrics. In future work, the proposed method will be evaluated on other databases, and the segmentation accuracy will be analysed in the presence of different kinds of noise.
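
A minimal sketch of the blockwise weighting idea described above: each modality slice is weighted by its blockwise standard deviation and the weighted modalities are summed into a composite image. The block size, per-pixel normalization, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def blockwise_std(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Per-block standard deviation, broadcast back to image size.
    Acts as a proxy for the information content of each block."""
    h, w = image.shape
    weights = np.zeros_like(image, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block]
            weights[i:i + block, j:j + block] = patch.std()
    return weights

def composite_image(modalities: dict) -> np.ndarray:
    """Fuse co-registered T1, FLAIR, T1C and T2 slices into one composite
    image by weighting each modality with its blockwise standard deviation
    and summing; weights are normalized to sum to one per pixel."""
    weights = {name: blockwise_std(img) for name, img in modalities.items()}
    total = sum(weights.values()) + 1e-8          # avoid division by zero
    fused = np.zeros_like(next(iter(modalities.values())), dtype=float)
    for name, img in modalities.items():
        fused += (weights[name] / total) * img
    return fused

# Example with random stand-ins for co-registered 2D slices.
rng = np.random.default_rng(0)
slices = {m: rng.random((240, 240)) for m in ("t1", "flair", "t1c", "t2")}
fused = composite_image(slices)
```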


Author(s):  
Boo-Kyeong Choi ◽  
Nuwan Madusanka ◽  
Heung-Kook Choi ◽  
Jae-Hong So ◽  
Cho-Hee Kim ◽  
...  

Background: In this study, we used a convolutional neural network (CNN) to classify Alzheimer’s disease (AD), mild cognitive impairment (MCI), and normal control (NC) subjects based on images of the hippocampus region extracted from magnetic resonance (MR) images of the brain. Materials and Methods: The datasets used in this study were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). To segment the hippocampal region automatically, the patient brain MR images were matched to the International Consortium for Brain Mapping (ICBM) template using 3D-Slicer software. Using prior knowledge and anatomical annotation label information, the hippocampal region was automatically extracted from the brain MR images. Results: The hippocampal area in each image was preprocessed with an intensity inhomogeneity correction method based on local entropy minimization with a bi-cubic spline model (LEMS). To train the CNN model, we separated the dataset into three groups, namely AD/NC, AD/MCI, and MCI/NC. The prediction model achieved an accuracy of 92.3% for AD/NC, 85.6% for AD/MCI, and 78.1% for MCI/NC. Conclusion: The results of this study were compared to those of previous studies, and summarized and analyzed to facilitate more flexible analyses based on additional experiments. The classification accuracy obtained by the proposed method is high. These findings suggest that this approach is efficient and may be a promising strategy to obtain good AD, MCI, and NC classification performance using small patch images of the hippocampus instead of whole slide images.
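
As an illustration of the patch-based classification setup described above, the following PyTorch sketch defines a small CNN for one of the pairwise tasks (e.g. AD vs. NC) on hippocampus patches; the architecture, patch size, and training step are illustrative assumptions, not the network used in the study.

```python
import torch
import torch.nn as nn

class HippocampusPatchCNN(nn.Module):
    """Small CNN for binary classification (e.g. AD vs. NC) of 2D
    hippocampus patch images; layer sizes are illustrative only."""
    def __init__(self, in_channels: int = 1, patch_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (patch_size // 4) * (patch_size // 4)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, 2),   # two classes per pairwise task
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step on a toy batch of 64x64 patches.
model = HippocampusPatchCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
patches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(patches), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```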


This paper presents brain tumor detection and segmentation using image processing techniques. Convolutional neural networks can be applied in medical research for brain tumor analysis. The tumor in the MRI scans is segmented using the K-means clustering algorithm, which is applied to every scan, and the result is fed to the convolutional neural network for training and testing. In our CNN, we use ReLU and sigmoid activation functions to determine the final result. Training is performed using only the CPU; no GPU is used. The work is carried out in two phases: image processing and applying the neural network.
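
A minimal sketch of the K-means pre-segmentation step described above, using scikit-learn; the cluster count and the "brightest cluster" heuristic for picking the tumor candidate are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(slice_2d: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster pixel intensities of a single MR slice with K-means and
    return the mask of the brightest cluster as a rough tumor candidate."""
    pixels = slice_2d.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(slice_2d.shape)
    brightest = int(np.argmax(km.cluster_centers_.ravel()))
    return labels == brightest

# The resulting mask (or the masked slice) can then be fed to a CNN whose
# hidden layers use ReLU activations and whose output uses a sigmoid.
```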

