A survey of MRI-based brain tumor segmentation methods

2014 ◽  
Vol 19 (6) ◽  
pp. 578-595 ◽  
Author(s):  
Jin Liu ◽  
Min Li ◽  
Jianxin Wang ◽  
Fangxiang Wu ◽  
Tianming Liu ◽  
...  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Wentao Wu ◽  
Daning Li ◽  
Jiaoyang Du ◽  
Xiangyu Gao ◽  
Wen Gu ◽  
...  

Among currently proposed brain tumor segmentation methods, those based on traditional image processing and classical machine learning give unsatisfactory results, so deep learning-based methods are now widely used. Within deep learning-based approaches, convolutional network models segment brain tumors well, but deep convolutional network models suffer from a large number of parameters and substantial information loss during encoding and decoding. This paper proposes a deep convolutional neural network fused with a support vector machine (DCNN-F-SVM). The proposed brain tumor segmentation model has three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor-marker space. In the second stage, the predicted labels produced by the trained deep convolutional neural network are fed, together with the test images, into an integrated support vector machine classifier. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Each model was run on the BraTS dataset and a self-made dataset to segment brain tumors. The segmentation results show that the proposed model performs significantly better than either the deep convolutional neural network or the integrated SVM classifier alone.
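
The cascade idea described above can be illustrated with a minimal sketch: a small fully convolutional network produces per-pixel tumor probabilities, and an SVM then makes the final per-pixel decision from the image intensity plus the CNN output. The network `SimpleSegCNN` and the two-feature layout are illustrative assumptions, not the DCNN-F-SVM architecture from the paper.

```python
# Minimal sketch of a CNN -> SVM cascade for per-pixel tumor labelling.
# SimpleSegCNN and the feature layout are assumptions, not the paper's model.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SimpleSegCNN(nn.Module):
    """Tiny fully convolutional network: image -> per-pixel tumor scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),            # 2 classes: background / tumor
        )
    def forward(self, x):
        return self.net(x)

def cascade_predict(cnn, svm, image):
    """Combine the CNN's tumor probability map with raw intensities and
    let the SVM make the final per-pixel decision (stage-2 idea)."""
    with torch.no_grad():
        probs = torch.softmax(cnn(image[None, None]), dim=1)[0, 1]
    feats = np.stack([image.numpy().ravel(), probs.numpy().ravel()], axis=1)
    return svm.predict(feats).reshape(image.shape)

# Usage: train `cnn` with cross-entropy on labelled slices, then fit
# svm = SVC(kernel="rbf") on the same [intensity, CNN-probability] features.
```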


Author(s):  
G. Sandhya ◽  
Giri Babu Kande ◽  
Savithri T. Satya

Accurate detection of tumors in brain MR images is very important for medical image analysis and interpretation. Tumors detected and treated at an early stage give better long-term survival than those detected late. This paper proposes a combined method of Self-Organizing Map (SOM) and Active Contour Model (ACM) for effective segmentation of brain tumors from MR images. ACMs are energy-based image segmentation methods that treat segmentation as an optimization problem: an objective function is formulated in terms of appropriate parameters and designed so that its minimum corresponds to a contour closely approximating the real object boundary. Traditional ACMs depend on pixel intensity and are very sensitive to parameter tuning, so handling image objects of distinct intensities is a challenge for them. Conversely, neural networks (NNs) are very effective at dealing with inhomogeneities but usually produce noisy results due to pixel misclassification, and they address segmentation without an explicit objective function. Hence, we propose a framework for brain tumor segmentation that integrates SOM with ACM, termed SOMACM. It works by integrating the global information derived from the weights (prototypes) of the trained SOM neurons to help decide whether to shrink or expand the current contour during the iterative optimization process. The proposed method can handle images with complex intensity distributions, even in the presence of noise. Experimental results demonstrate the high accuracy of the SOMACM segmentation results on different tumor images compared with the ACM and general SOM segmentation methods. Furthermore, the proposed framework is not highly sensitive to parameter tuning.
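
To make the SOM half of this idea concrete, the sketch below trains a small one-dimensional SOM over pixel intensities and produces a prototype-label map that a contour could consult as global region information when deciding to shrink or expand. The contour evolution itself is omitted; the four nodes, Gaussian neighbourhood, and decay schedules are assumptions, not the SOMACM settings.

```python
# Minimal sketch of a 1-D SOM over image intensities (assumed settings).
import numpy as np

def train_som(pixels, n_nodes=4, epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """pixels: 1-D array of intensities. Returns sorted prototype values."""
    rng = np.random.default_rng(seed)
    protos = rng.choice(pixels, size=n_nodes).astype(float)
    idx = np.arange(n_nodes)
    n_steps, t = epochs * len(pixels), 0
    for _ in range(epochs):
        for x in rng.permutation(pixels):
            lr = lr0 * (1 - t / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-3  # shrinking neighbourhood
            bmu = np.argmin(np.abs(protos - x))        # best matching unit
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            protos += lr * h * (x - protos)            # neighbourhood update
            t += 1
    return np.sort(protos)

def som_region_map(image, protos):
    """Label each pixel with its closest prototype; a contour can use this
    map as the global region term during optimisation."""
    return np.argmin(np.abs(image[..., None] - protos), axis=-1)
```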


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 721 ◽  
Author(s):  
Jianxin Zhang ◽  
Xiaogang Lv ◽  
Hengbo Zhang ◽  
Bin Liu

Automatic segmentation of brain tumors from magnetic resonance imaging (MRI) is a challenging task due to the uneven, irregular, and unstructured size and shape of tumors. Recently, brain tumor segmentation methods based on the symmetric U-Net architecture have achieved favorable performance. Meanwhile, recent works have also shown that enhancing local responses is effective for feature extraction and restoration, which suggests it may likewise benefit brain tumor segmentation. Inspired by this, we introduce an attention mechanism into the existing U-Net architecture to explore the effect of locally important responses on this task. More specifically, we propose an end-to-end 2D brain tumor segmentation network, the attention residual U-Net (AResU-Net), which embeds both an attention mechanism and residual units into U-Net to further improve brain tumor segmentation. AResU-Net adds a series of attention units between corresponding down-sampling and up-sampling stages, adaptively rescaling features to effectively enhance the local responses of down-sampling residual features used for feature recovery in the subsequent up-sampling process. We extensively evaluate AResU-Net on two MRI brain tumor segmentation benchmarks, the BraTS 2017 and BraTS 2018 datasets. Experimental results show that the proposed AResU-Net outperforms its baselines and achieves performance comparable to typical brain tumor segmentation methods.
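
The general pattern of rescaling skip-connection features with an attention unit on top of a residual block can be sketched as below. The SE-style channel attention and the block sizes are assumptions for illustration, not the exact AResU-Net design.

```python
# Sketch of an attention-refined residual skip feature in a U-Net decoder path.
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))    # identity shortcut

class ChannelAttention(nn.Module):
    """Adaptively rescales feature channels before they join the up-sampling path."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.gate(x)                 # per-channel gating

# Example: attention-refined skip feature handed to the decoder.
skip = torch.randn(1, 32, 64, 64)
refined = ChannelAttention(32)(ResidualUnit(32)(skip))
```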


2021 ◽  
Vol 21 (3) ◽  
pp. 1-14
Author(s):  
He-Xuan Hu ◽  
Wen-Jie Mao ◽  
Zhen-Zhou Lin ◽  
Qiang Hu ◽  
Ye Zhang

Smart hospitals are important components of smart cities, and an intelligent medical system for brain tumor segmentation is required to build them. To achieve intelligent brain tumor segmentation, morphological variety and severe class imbalance must be handled effectively; conventional deep neural networks have difficulty producing high-accuracy segmentations under these conditions. To address these problems, we propose combining multimodal brain tumor images with UNET and LSTM models to construct a new network structure with a mixed loss function that counters sample imbalance, and we describe an intelligent segmentation process for identifying brain tumors. To verify the practicability of this algorithm, we trained and validated the proposed network on the open-source Brain Tumor Segmentation Challenge (BraTS) dataset. We obtained DSCs of 0.91, 0.82, and 0.80; sensitivities of 0.93, 0.85, and 0.82; and specificities of 0.99, 0.99, and 0.98 in three tumor regions: the whole tumor (WT), tumor core (TC), and enhancing tumor (ET). We also compared the results of the proposed network with those of other brain tumor segmentation methods; the comparison showed that the proposed algorithm segments different tumor lesions more accurately, highlighting its potential value in the clinical diagnosis of brain tumors.
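
A common way to realize a mixed loss against class imbalance is to combine soft Dice with cross-entropy, as in the hedged sketch below. The equal 0.5/0.5 weighting and four-class setup are assumptions; the paper's exact loss formulation may differ.

```python
# Sketch of a mixed Dice + cross-entropy loss for imbalanced segmentation.
import torch
import torch.nn.functional as F

def mixed_loss(logits, target, n_classes=4, dice_weight=0.5, eps=1e-6):
    """logits: (N, C, H, W) raw scores; target: (N, H, W) integer labels."""
    ce = F.cross_entropy(logits, target)

    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, n_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    union = probs.sum(dims) + one_hot.sum(dims)
    dice = (2 * intersection + eps) / (union + eps)    # per-class soft Dice
    dice_loss = 1 - dice.mean()

    return dice_weight * dice_loss + (1 - dice_weight) * ce

# Example call on dummy data:
# loss = mixed_loss(torch.randn(2, 4, 64, 64), torch.randint(0, 4, (2, 64, 64)))
```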


Author(s):  
Ghazanfar Latif ◽  
Jaafar Alghazo ◽  
Fadi N. Sibai ◽  
D.N.F. Awang Iskandar ◽  
Adil H. Khan

Background: Image segmentation techniques, particularly those used for brain MRI segmentation, range in complexity from the basic standard Fuzzy C-Means (FCM) to more complex, enhanced FCM techniques. Objective: This paper presents a comprehensive review of all thirteen variations of FCM segmentation techniques, concentrating on their use for brain tumors. Brain tumor segmentation is a vital step in automatically diagnosing brain tumors. Unlike the segmentation of other types of images, brain tumor segmentation is very challenging due to variations in brain anatomy, and the low contrast of brain images further complicates the process. Early diagnosis of brain tumors benefits patients, doctors, and medical providers. Results: FCM segmentation works on images obtained from magnetic resonance imaging (MRI) scanners and requires only minor modifications to hospital operations to diagnose tumors early, as most, if not all, hospitals rely on MRI machines for brain imaging. In this paper, we critically review and summarize FCM-based techniques for brain MRI segmentation.
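
For reference, the standard FCM baseline that the reviewed variants extend alternates between updating cluster centers and fuzzy memberships. The sketch below runs it over pixel intensities; the three clusters and fuzzifier m = 2 are the usual defaults, not values taken from the paper.

```python
# Minimal numpy sketch of standard Fuzzy C-Means over pixel intensities.
import numpy as np

def fcm(pixels, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """pixels: 1-D array of intensities. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(pixels), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)          # fuzzy-weighted means
        dist = np.abs(pixels[:, None] - centers[None, :]) + 1e-10
        new_u = 1.0 / (dist ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=1, keepdims=True)           # membership update
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Hard segmentation: assign each pixel to its highest-membership cluster.
# labels = fcm(image.ravel())[1].argmax(axis=1).reshape(image.shape)
```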


2017 ◽  
Vol 16 (2) ◽  
pp. 129-136 ◽  
Author(s):  
Tianming Zhan ◽  
Yi Chen ◽  
Xunning Hong ◽  
Zhenyu Lu ◽  
Yunjie Chen
