Data Augmentation for Brain-Tumor Segmentation: A Review

Author(s):  
Jakub Nalepa ◽  
Michal Marcinkiewicz ◽  
Michal Kawulok

2021 ◽  
Author(s):  
Radhika Malhotra ◽  
Jasleen Saini ◽  
Barjinder Singh Saini ◽  
Savita Gupta

Over the past decade, convolutional neural networks (CNNs) have evolved remarkably for biomedical image processing, and these advances now underpin deep learning-based models for computer-aided detection and prognosis of various ailments. However, training such CNN-based networks with supervised learning depends heavily on large amounts of data, which are needed to mitigate overfitting, a major concern in supervised techniques. Overfitting refers to the phenomenon in which a network learns patterns specific to the training data, fitting it well but generalizing poorly to unseen data. The limited accessibility of large quantities of data constrains research in the medical domain. This paper focuses on the utility of data augmentation (DA) techniques, a well-recognized solution to the problem of limited data. The experiments were performed on the publicly available Brain Tumor Segmentation (BraTS) dataset. The results show that different DA approaches improved the accuracy of segmenting brain tumor boundaries with a CNN-based model.
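The abstract does not list the specific DA transforms used; a minimal sketch of the kind of paired augmentation typically applied to an MR slice and its segmentation mask (random flip, rotation, and mild intensity scaling, here with hypothetical parameter ranges) might look like:

```python
import numpy as np

def augment(image, mask, rng):
    """Apply one random spatial/intensity augmentation to a 2-D MR slice
    and its segmentation mask, kept in lockstep. Illustrative sketch only;
    the reviewed methods use their own transform sets and parameters."""
    if rng.random() < 0.5:                     # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))                # rotate by 0/90/180/270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    scale = rng.uniform(0.9, 1.1)              # mild intensity scaling (image only,
    image = image * scale                      # labels must stay untouched)
    return image, mask
```

Note that spatial transforms are applied identically to image and mask, while intensity perturbations touch only the image, so the label geometry is never corrupted.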


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4203 ◽  
Author(s):  
Qingyun Li ◽  
Zhibin Yu ◽  
Yubo Wang ◽  
Haiyong Zheng

The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
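The "regional L1 loss" described above constrains generated intensities only inside the relevant brain-tissue region. A small sketch of that idea, assuming the region is given as a binary mask (the paper's exact formulation and weighting may differ):

```python
import numpy as np

def regional_l1(generated, real, region_mask):
    """Mean absolute intensity difference restricted to pixels where
    region_mask == 1. Illustrative sketch of a regional L1 constraint;
    not TumorGAN's exact implementation."""
    region = region_mask.astype(bool)
    if not region.any():          # empty region: nothing to penalize
        return 0.0
    return float(np.abs(generated[region] - real[region]).mean())
```

Restricting the penalty to the tissue region keeps the generator free outside it (e.g. background), while still anchoring tissue color to the real image.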


2021 ◽  
pp. 1-11 ◽  
Author(s):  
Ankur Biswas ◽  
Paritosh Bhattacharya ◽  
Santi P. Maity ◽  
Rita Banik

2021 ◽  
pp. 1-12 ◽  
Author(s):  
K. Sambath Kumar ◽  
A. Rajendran

Manual segmentation of brain tumors is not only tedious but also prone to human error. Automatic segmentation produces results faster and can extend survival rates by enabling an earlier treatment plan. Hence, an automatic brain tumor segmentation model, a modified inception module based U-Net (IMU-Net), is proposed. It takes magnetic resonance (MR) images from the BraTS 2017 training dataset with four modalities (FLAIR, T1, T1ce, and T2). The concatenation of two 3×3 convolution kernels in series, one 5×5 kernel, and one 1×1 kernel is used to extract the whole tumor (WT), core tumor (CT), and enhancing tumor (ET). The modified inception module (IM) collects all the relevant features and provides better segmentation results. The proposed deep learning model contains 40 convolution layers and uses intensity normalization and data augmentation for further improvement. During evaluation it achieved mean Dice similarity coefficients (DSC) of 0.90, 0.77, and 0.74 and mean Intersection over Union (IoU) scores of 0.79, 0.70, and 0.70 for WT, CT, and ET, respectively.
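The inception-style block described above runs parallel branches (two stacked 3×3 kernels, one 5×5, one 1×1) and concatenates their outputs channel-wise. A minimal single-channel numpy sketch of that branch structure, with random kernels standing in for learned weights (IMU-Net's actual layers, channel counts, and activations are not specified here):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D cross-correlation of a single-channel map."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def inception_block(x, rng):
    """Parallel branches (3x3 -> 3x3, 5x5, 1x1) concatenated along a
    channel axis, mirroring the abstract's description. Random kernels
    stand in for trained weights; sketch only."""
    b33 = conv2d_same(conv2d_same(x, rng.standard_normal((3, 3))),
                      rng.standard_normal((3, 3)))
    b55 = conv2d_same(x, rng.standard_normal((5, 5)))
    b11 = conv2d_same(x, rng.standard_normal((1, 1)))
    return np.stack([b33, b55, b11], axis=0)   # channel-wise concatenation
```

The two stacked 3×3 branches give the same receptive field as a 5×5 kernel with fewer parameters, which is the usual motivation for inception-style designs.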


2020 ◽  
Vol 64 (4) ◽  
pp. 40412-1-40412-11
Author(s):  
Kexin Bai ◽  
Qiang Li ◽  
Ching-Hsin Wang

Abstract To address the issues of the relatively small size of brain tumor image datasets, severe class imbalance, and low precision in existing segmentation algorithms for brain tumor images, this study proposes a two-stage segmentation algorithm integrating convolutional neural networks (CNNs) and conventional methods. Four modalities of the original magnetic resonance images were first preprocessed separately. Next, preliminary segmentation was performed using an improved U-Net CNN containing deep supervision, residual structures, dense connection structures, and dense skip connections. The authors adopted a multiclass Dice loss function to deal with class imbalance and successfully prevented overfitting using data augmentation. The preliminary segmentation results subsequently served as a priori knowledge for a continuous maximum flow algorithm for fine segmentation of target edges. Experiments revealed that the mean Dice similarity coefficients of the proposed algorithm in whole tumor, tumor core, and enhancing tumor segmentation were 0.9072, 0.8578, and 0.7837, respectively. The proposed algorithm presents higher accuracy and better stability in comparison with some of the more advanced segmentation algorithms for brain tumor images.
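The multiclass Dice loss mentioned above scores each class's overlap separately and averages, so small classes (e.g. enhancing tumor) count as much as large ones. A sketch under the assumption of one-hot targets and per-class soft predictions of shape (C, H, W); the paper's exact smoothing and weighting may differ:

```python
import numpy as np

def multiclass_dice_loss(probs, onehot, eps=1e-6):
    """1 minus the mean soft-Dice coefficient over classes.
    probs:  predicted class probabilities, shape (C, H, W)
    onehot: one-hot ground truth, same shape
    Illustrative sketch of a multiclass Dice loss."""
    axes = (1, 2)                              # sum over spatial dims per class
    inter = (probs * onehot).sum(axis=axes)
    denom = probs.sum(axis=axes) + onehot.sum(axis=axes)
    dice = (2.0 * inter + eps) / (denom + eps)  # eps keeps empty classes stable
    return float(1.0 - dice.mean())
```

Because each class contributes equally to the mean, the gradient is not dominated by the background class, which is the usual remedy for the class imbalance the abstract describes.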

