Brain Tumor Segmentation Based on Deep Learning’s Feature Representation

2021 ◽  
Vol 7 (12) ◽  
pp. 269
Author(s):  
Ilyasse Aboussaleh ◽  
Jamal Riffi ◽  
Adnane Mohamed Mahraz ◽  
Hamid Tairi

Brain tumors are among the most serious causes of death in the world, so it is very important to detect them as early as possible. Many approaches have been proposed to predict and segment tumors; however, they suffer from problems such as the need for a specialist's intervention, long run times and the choice of an appropriate feature extractor. To address these issues, we proposed an approach based on a convolutional neural network architecture that predicts and segments a cerebral tumor simultaneously. The proposal was divided into two phases. First, to avoid labeled images, which imply subjective intervention by a specialist, we used a simple binary annotation that reflects whether a tumor exists or not. Second, the prepared image data were fed into our deep learning model to obtain the final classification; if the classification indicated the presence of a tumor, the brain tumor was segmented based on the feature representations generated by the convolutional neural network architecture. The proposed method was trained on the BraTS 2017 dataset with different types of gliomas. The achieved results show the performance of the proposed approach in terms of accuracy, precision, recall and Dice similarity coefficient: our model achieved an accuracy of 91% in tumor classification and a Dice similarity coefficient of 82.35% in tumor segmentation.
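The Dice similarity coefficient reported above measures the overlap between a predicted mask and the ground truth. A minimal NumPy sketch (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

# toy example: two partially overlapping masks
a = np.array([[0, 1, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap; the 82.35% above corresponds to a coefficient of about 0.82 on this scale.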

This paper introduces a scheme for retrieving deep features to carry out the procedure of recognising brain tumors from MR images. Initially, the MR brain image is denoised with the Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF), after which the contrast of the image is improved through Contrast Limited Adaptive Histogram Equalization (CLAHE). Once the pre-processing task is completed, the next phase is feature extraction. To acquire the features of the pre-processed images, this article offers a feature extraction technique named Deep Weber Dominant Local Order Based Feature Generator (DWDLOBFG). Once the deep features are retrieved, the next stage is to segment the brain tumor, which is achieved with an Improved Convolution Neural Network (ICNN). To explore the efficiency of the deep feature extraction and in-depth machine learning methods, four performance indicators were used: Sensitivity (SEN), Jaccard Index (JI), Dice Similarity Coefficient (DSC) and Positive Predictive Value (PPV). The experimental outputs illustrate that DWDLOBFG and ICNN achieve better results than existing techniques.
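The denoising step above removes impulse noise before contrast enhancement. As a simplified stand-in for the MDBUTMF variant, a plain 3×3 median filter illustrates the idea (a hedged sketch, not the paper's algorithm):

```python
import numpy as np

def median_filter_3x3(img):
    """Simplified 3x3 median denoising (a stand-in for the MDBUTMF variant above)."""
    padded = np.pad(img, 1, mode='edge')  # replicate borders so edges keep their size
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # salt-noise impulse
                  [10, 10, 10]])
print(median_filter_3x3(noisy))  # the 255 impulse is replaced by the local median 10
```

The actual MDBUTMF additionally trims 0/255 values from the window before taking the median, which handles high noise densities better than this plain version.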


2018 ◽  
Vol 2018 ◽  
pp. 1-14 ◽  
Author(s):  
Shaoguo Cui ◽  
Lei Mao ◽  
Jingfeng Jiang ◽  
Chang Liu ◽  
Shuyu Xiong

Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method for MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technique, was used to first process the MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method obtained promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.
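The cascade above runs two stages per slice: the TLN produces a coarse tumor mask, then the ITCN labels pixels inside that mask into subregions. A toy sketch of the control flow, with simple thresholding and intensity binning standing in for the two trained networks (all names and thresholds are illustrative assumptions):

```python
import numpy as np

def localize(slice_, thresh=0.5):
    """Toy stand-in for the TLN: a binary tumor mask from one normalized MRI slice."""
    return slice_ > thresh

def classify_subregions(slice_, mask, edges=(0.7, 0.9)):
    """Toy stand-in for the ITCN: label masked pixels into subregions by intensity."""
    labels = np.zeros(slice_.shape, dtype=int)       # 0 = background
    labels[mask] = np.digitize(slice_[mask], edges) + 1  # subregion labels 1..3
    return labels

slice_ = np.array([[0.1, 0.6],
                   [0.8, 0.95]])
mask = localize(slice_)                  # stage 1: where is the tumor?
labels = classify_subregions(slice_, mask)  # stage 2: which subregion is each pixel?
print(labels)  # [[0 1] [2 3]]
```

The point of the cascade is that stage 2 only has to discriminate subregions within the stage-1 mask, which is a much easier classification problem than labeling the whole slice at once.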


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 320
Author(s):  
Yue Zhao ◽  
Xiaoqiang Ren ◽  
Kun Hou ◽  
Wentao Li

Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis. However, robust and accurate automatic extraction of brain tumors is a big challenge because of the inherent heterogeneity of the tumor structure. In this paper, we present an efficient semantic segmentation 3D recurrent multi-fiber network (RMFNet), based on an encoder–decoder architecture, to segment the brain tumor accurately. The 3D RMFNet comprises a 3D recurrent unit and a 3D multi-fiber unit. First, the recurrent unit combines recurrent connections with convolutional layers, which enhances the model's ability to integrate contextual information. Then, a 3D multi-fiber unit is added to the overall network to reduce the high computational cost incurred when a 3D network architecture is used to capture local features. The 3D RMFNet thus combines the advantages of both units. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that our RMFNet remarkably outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65% and 78.72% for the whole tumor, tumor core and enhancing tumor, respectively. The experimental results prove our architecture to be an efficient and accurate tool for brain tumor segmentation.
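The multi-fiber idea reduces the cost of 3D convolutions by slicing the channels into parallel groups ("fibers"), so each filter sees only a fraction of the input channels. The parameter saving can be checked with simple arithmetic (the channel counts below are illustrative, not taken from the paper):

```python
def conv3d_params(c_in, c_out, k=3, groups=1):
    """Weight count of a 3D convolution (bias omitted): each of the c_out
    filters spans only c_in/groups input channels when grouped."""
    return (c_in // groups) * c_out * k ** 3

standard = conv3d_params(64, 64)              # one dense 3D convolution
multifiber = conv3d_params(64, 64, groups=8)  # 8 parallel "fibers"
print(standard, multifiber)  # 110592 vs 13824
print(standard // multifiber)  # 8x fewer parameters (and FLOPs scale the same way)
```

This is why grouping is especially attractive in 3D: the k³ factor makes dense volumetric convolutions very expensive, and splitting into G fibers divides both parameters and multiply–accumulates by G.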


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Daobin Huang ◽  
Minghui Wang ◽  
Ling Zhang ◽  
Haichun Li ◽  
Minquan Ye ◽  
...  

Abstract Background Accurately segmenting the tumor region of MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. Methods We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels and (2) a Multi-scale Feature Fusing Network (MSFFN) to merge all the different scale features in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network for the class imbalance issue. Results We validated our method on BRATS 2015, achieving Dice scores of 0.86, 0.73 and 0.61 for the three tumor regions (complete, core and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance in the other two regions. Conclusions The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions can alleviate the class imbalance issue and guide the training process. The proposed method can be used in other medical segmentation tasks.
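Hybrid losses of the kind mentioned above typically blend an overlap term (Dice) with a pixel-wise term (cross-entropy): the Dice term is insensitive to the background/tumor imbalance, while cross-entropy gives smooth per-pixel gradients. A minimal NumPy sketch of one common combination (the weighting and toy data are assumptions, not the paper's exact losses):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss: 1 - 2*sum(p*t) / (sum(p) + sum(t))."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def bce_loss(probs, target, eps=1e-7):
    """Binary cross-entropy, averaged over pixels."""
    p = np.clip(probs, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def hybrid_loss(probs, target, alpha=0.5):
    """Weighted blend of the overlap term and the pixel-wise term."""
    return alpha * soft_dice_loss(probs, target) + (1 - alpha) * bce_loss(probs, target)

probs  = np.array([0.9, 0.8, 0.2, 0.1])   # predicted tumor probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth labels
print(hybrid_loss(probs, target))  # small loss for a good prediction
```

Because the Dice term normalizes by the total foreground mass, a network cannot drive the loss down by simply predicting "background everywhere", which is the failure mode plain cross-entropy invites on heavily imbalanced tumor masks.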


Diagnostics ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 110 ◽  
Author(s):  
Pius Kwao Gadosey ◽  
Yujian Li ◽  
Enock Adjei Agyekum ◽  
Ting Zhang ◽  
Zhaoying Liu ◽  
...  

During image segmentation tasks in computer vision, achieving high accuracy while requiring fewer computations and faster inference is a big challenge. This is especially important in medical imaging tasks, where one metric is usually compromised for the other. To address this problem, this paper presents an extremely fast, small and computationally effective deep neural network called Stripped-Down UNet (SD-UNet), designed for the segmentation of biomedical data on devices with limited computational resources. By making use of depthwise separable convolutions in the entire network, we design a lightweight deep convolutional neural network architecture inspired by the widely adopted U-Net model. In order to recover from the expected performance degradation in the process, we introduce a weight standardization algorithm with the group normalization method. We demonstrate that SD-UNet has three major advantages: (i) smaller model size (23x smaller than U-Net); (ii) 8x fewer parameters; and (iii) faster inference time with a computational complexity lower than 8M floating point operations (FLOPs). Experiments on the benchmark dataset of the International Symposium on Biomedical Imaging (ISBI) challenge for segmentation of neuronal structures in electron microscopic (EM) stacks and the Medical Segmentation Decathlon (MSD) challenge brain tumor segmentation (BRATs) dataset show that the proposed model achieves comparable and sometimes better results compared to the current state-of-the-art.
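The parameter saving from depthwise separable convolutions follows directly from their factorization: one k×k filter per input channel (depthwise), then a 1×1 convolution to mix channels (pointwise). A quick count for illustrative channel sizes (not taken from the SD-UNet architecture itself):

```python
def standard_conv_params(c_in, c_out, k=3):
    """Weight count of a dense 2D convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    """Depthwise (one k*k filter per input channel) + pointwise (1x1 channel mixing)."""
    return c_in * k * k + c_in * c_out

c_in, c_out = 64, 128
print(standard_conv_params(c_in, c_out))        # 73728
print(depthwise_separable_params(c_in, c_out))  # 8768, roughly an 8x reduction
```

The reduction factor approaches k² + a small channel-mixing term as c_out grows, which is consistent with the "8x fewer parameters" figure quoted above for k = 3.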


This paper presents brain tumor detection and segmentation using image processing techniques. Convolutional neural networks can be applied to medical research in brain tumor analysis. The tumor in the MRI scans is segmented using the K-means clustering algorithm, which is applied to every scan, and the result is fed to the convolutional neural network for training and testing. In our CNN we propose to use ReLU and Sigmoid activation functions to determine the end result. The training is done using only CPU power; no GPU is used. The research is done in two phases: image processing and applying the neural network.
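The K-means step above clusters pixels by intensity so that bright tumor tissue separates from darker background. A minimal 1D k-means sketch (the toy intensities and function name are illustrative, and real pipelines would cluster full images, e.g. with scikit-learn's KMeans):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal k-means on pixel intensities (stand-in for the clustering step above)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign each intensity to its nearest center, then recompute the means
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

scan = np.array([10, 12, 11, 200, 210, 205], dtype=float)  # dark background vs bright region
labels, centers = kmeans_1d(scan)
print(sorted(centers))  # two cluster centres, near 11 and 205
```

The resulting binary label map is what gets passed on to the CNN stage for training and testing.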

