Segmentation of Brain Gliomas Based on a Multi-modal Multi-scale Double-pathway 3D Residual CNN

2020 ◽  
Author(s):  
Mingyuan Pan ◽  
Yonghong Shi ◽  
Zhijian Song

Abstract Background: The automated segmentation of brain glioma regions in magnetic resonance (MR) images plays an important role in the early diagnosis, intraoperative navigation, radiotherapy planning and prognosis of brain tumors. Segmenting gliomas and intratumoral structures is very challenging because the location, size, shape, edema range and boundary of gliomas are heterogeneous, and multimodal brain glioma images (such as T1, T2, fluid-attenuated inversion recovery (FLAIR), and T1c images) are collected from multiple radiation centers. Methods: This paper presents a multimodal, multi-scale, double-pathway, 3D residual convolutional neural network (CNN) for automatic glioma segmentation. First, a robust gray-level normalization method is proposed to solve the multi-center problem, in which intensity ranges differ greatly across imaging protocols. Second, a multi-scale, double-pathway network based on the DeepMedic toolkit is trained with different combinations of multimodal MR images for glioma segmentation. Finally, a fully connected conditional random field (CRF) is used as a post-processing step to optimize the segmentation results, addressing isolated segmentations and holes. Results: Experiments on the Multimodal Brain Tumor Segmentation (BraTS) 2017 and 2019 challenge data show that our method achieves good performance in delineating the whole tumor, with a Dice coefficient, sensitivity and positive predictive value (PPV) of 0.88, 0.89 and 0.88, respectively. For the segmentation of the tumor core and the enhancing area, the sensitivity reached 0.80. Conclusions: Experiments show that our method can accurately segment gliomas and intratumoral structures from multimodal MR images, which is of great significance to clinical neurosurgery.
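The abstract does not give the exact normalization formula, so the sketch below shows one common robust scheme for the multi-center intensity problem it describes: clip each modality to low/high percentiles, then z-score. The function name, percentile bounds, and the synthetic "two centers" are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def robust_normalize(volume, mask=None, low_pct=1.0, high_pct=99.0):
    """Robust gray-level normalization for one MR modality.

    Clips intensities to percentiles computed inside an optional brain mask,
    then z-scores, so volumes from different centers land in a comparable
    intensity range. Illustrative; the paper's exact scheme may differ.
    """
    vol = volume.astype(np.float64)
    region = vol[mask] if mask is not None else vol.ravel()
    lo, hi = np.percentile(region, [low_pct, high_pct])
    clipped = np.clip(vol, lo, hi)
    region = clipped[mask] if mask is not None else clipped.ravel()
    return (clipped - region.mean()) / (region.std() + 1e-8)

# Two synthetic "centers" with very different raw intensity ranges
rng = np.random.default_rng(0)
center_a = rng.normal(300.0, 50.0, size=(8, 8, 8))
center_b = rng.normal(1200.0, 200.0, size=(8, 8, 8))
norm_a, norm_b = robust_normalize(center_a), robust_normalize(center_b)
```

After normalization both volumes have roughly zero mean and unit variance, so a network trained on one center's data sees comparable inputs from the other.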

2020 ◽  
Vol 10 (11) ◽  
pp. 2784-2794
Author(s):  
Mingyuan Pan ◽  
Yonghong Shi ◽  
Zhijian Song

The automatic segmentation of brain tumors in magnetic resonance (MR) images is very important in diagnosis, radiotherapy planning, surgical navigation and several other clinical processes. Because the location, size, shape and boundary of gliomas are heterogeneous, segmenting gliomas and intratumoral structures is very difficult. The multi-center issue makes it even more challenging, since multimodal brain glioma images (such as T1, T2, fluid-attenuated inversion recovery (FLAIR), and T1c images) come from different radiation centers. This paper presents a multimodal, multi-scale, double-pathway, 3D residual convolutional neural network (CNN) for automatic glioma segmentation. In the pre-processing step, a robust gray-level normalization method is proposed to solve the multi-center problem of intensity ranges that vary greatly between centers. Then, a double-pathway 3D architecture based on the DeepMedic toolkit is trained using multi-modality information to fuse local and context features. In the post-processing step, a fully connected conditional random field (CRF) is built to improve performance, filling and connecting isolated segmentations and holes. Experiments on the Multimodal Brain Tumor Segmentation (BraTS) 2017 and 2019 datasets showed that this method can delineate the whole tumor with a Dice coefficient, sensitivity and positive predictive value (PPV) of 0.88, 0.89 and 0.88, respectively. For the segmentation of the tumor core and the enhancing area, the sensitivity reached 0.80. The results indicate that this method can accurately segment gliomas and intratumoral structures from multimodal MR images and has clinical practice value.
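The paper's post-processing step is a fully connected CRF; as a lightweight illustration of the same two goals named in the abstract (removing isolated segmentations and filling holes), the sketch below uses simple morphology from `scipy.ndimage`. The function name and voxel-count threshold are assumptions, and this is not the CRF the authors used.

```python
import numpy as np
from scipy import ndimage

def clean_segmentation(binary_mask, min_voxels=10):
    """Post-process a binary tumor mask.

    Drops connected components smaller than `min_voxels` (isolated
    segmentations) and fills internal holes. A stand-in illustration for
    the paper's fully connected CRF, which optimizes label consistency
    more globally.
    """
    labeled, n = ndimage.label(binary_mask)
    sizes = ndimage.sum(binary_mask, labeled, index=range(1, n + 1))
    keep = np.isin(labeled, 1 + np.flatnonzero(sizes >= min_voxels))
    return ndimage.binary_fill_holes(keep)

# Example: a solid cube with an internal hole, plus a stray false positive
mask = np.zeros((16, 16, 16), dtype=bool)
mask[4:12, 4:12, 4:12] = True
mask[8, 8, 8] = False   # hole inside the tumor region
mask[0, 0, 0] = True    # isolated single-voxel segmentation
cleaned = clean_segmentation(mask)
```

The stray voxel is removed and the internal hole is filled, which is the qualitative effect the abstract attributes to the CRF step.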


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Daobin Huang ◽  
Minghui Wang ◽  
Ling Zhang ◽  
Haichun Li ◽  
Minquan Ye ◽  
...  

Abstract Background Accurately segmenting the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. Methods We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels and (2) a Multi-scale Feature Fusing Network (MSFFN) to merge all the different scale features in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network against the class imbalance issue. Results We validate our method on BraTS 2015, achieving Dice scores of 0.86, 0.73 and 0.61 for the three tumor regions (complete, core and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance in the other two regions. Conclusions The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions can alleviate the class imbalance issue and guide the training process. The proposed method can also be used in other medical segmentation tasks.
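The abstract mentions two hybrid loss functions for class imbalance without giving their formulas, so the sketch below shows one common hybrid variant, a weighted sum of soft Dice loss and binary cross-entropy. The mixing weight `alpha` and all function names are assumptions for illustration.

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for one foreground class (predicted probabilities vs. binary target)."""
    inter = np.sum(probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)

def binary_cross_entropy(probs, target, eps=1e-7):
    """Mean binary cross-entropy, with probabilities clipped for numerical safety."""
    p = np.clip(probs, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def hybrid_loss(probs, target, alpha=0.5):
    """Dice + BCE hybrid: Dice counters class imbalance (it ignores the large
    background), while BCE keeps per-voxel gradients well behaved."""
    return alpha * soft_dice_loss(probs, target) + (1 - alpha) * binary_cross_entropy(probs, target)

# Tiny flattened example: a confident correct prediction vs. a poor one
target = np.array([0, 0, 1, 1], dtype=float)
good = np.array([0.05, 0.05, 0.95, 0.95])
bad = np.array([0.6, 0.6, 0.4, 0.4])
```

On this example the good prediction scores a much lower hybrid loss than the bad one, which is the behavior a training loop relies on.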


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Chih-Wei Lin ◽  
Yu Hong ◽  
Jinfu Liu

Abstract Background Glioma is a malignant brain tumor; its location is complex, and it is difficult to remove surgically. Using medical images, doctors can precisely diagnose and localize the disease. However, computer-assisted diagnosis of brain tumors remains a problem, because rough segmentation of the brain tumor makes the internal grading of the tumor incorrect. Methods In this paper, we propose an Aggregation-and-Attention Network for brain tumor segmentation. The proposed network takes U-Net as its backbone, aggregates multi-scale semantic information, and focuses on crucial information to perform brain tumor segmentation. To this end, we propose an enhanced down-sampling module and an up-sampling layer to compensate for information loss. A multi-scale connection module constructs multi-receptive semantic fusion between the encoder and decoder. Furthermore, we design a dual-attention fusion module that can extract and enhance the spatial relationships in magnetic resonance imaging, and we apply a deep supervision strategy in different parts of the proposed network. Results Experimental results show that the proposed framework performs best on the BraTS2020 dataset compared with state-of-the-art networks. The proposed framework surpasses all the comparison networks, and its average accuracies on the four indexes are 0.860, 0.885, 0.932, and 1.2325, respectively. Conclusions The framework and modules of the proposed method are scientific and practical; they can extract and aggregate useful semantic information and enhance the ability to segment gliomas.
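The abstract names a dual-attention fusion module but does not define it. As a rough illustration of one half of such a module, the sketch below implements a squeeze-and-excitation style channel-attention branch over a 3D feature map; the projection matrices `w1`/`w2`, the reduction ratio, and the function name are all hypothetical, and the paper's module also includes a spatial branch not shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Channel-attention branch over a (C, D, H, W) feature map.

    Global-average-pools each channel, passes the pooled vector through a
    small two-layer bottleneck, and rescales every channel by a weight in
    (0, 1). One common way to let a network emphasize informative channels.
    """
    c = features.shape[0]
    squeezed = features.reshape(c, -1).mean(axis=1)            # (C,) per-channel summary
    excitation = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))  # (C,) weights in (0, 1)
    return features * excitation[:, None, None, None]

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4, 4))   # 8 channels of a 4x4x4 feature map
w1 = rng.normal(size=(2, 8)) * 0.5      # hypothetical bottleneck (reduction 4)
w2 = rng.normal(size=(8, 2)) * 0.5
out = channel_attention(feats, w1, w2)
```

Because every channel weight lies in (0, 1), the output keeps the input's shape while attenuating less informative channels.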


Author(s):  
Palash Ghosal ◽  
Shanmukha Reddy ◽  
Charan Sai ◽  
Vikas Pandey ◽  
Jayasree Chakraborty ◽  
...  
