A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images

Sensors, 2022, Vol 22 (2), pp. 523
Author(s): Kh Tohidul Islam, Sudanthi Wijewickrema, Stephen O’Leary

Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide information that no single image modality alone can provide, integrating such information to be used in segmentation is a challenging task. Numerous methods have been introduced to solve the problem of multi-modal medical image segmentation in recent years. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method of enhancing an existing magnetic resonance imaging (MRI) dataset by generating synthetic computed tomography (CT) images. Then, we discuss a process of systematic optimization of a convolutional neural network (CNN) architecture that uses this enhanced dataset, in order to customize it for our task. Using publicly available datasets, we show that the proposed method outperforms similar existing methods.
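The abstract does not include code; as a minimal sketch of the kind of multi-modal input fusion it describes, the PyTorch snippet below stacks an MRI volume and its synthetic CT counterpart as input channels to a toy 3D CNN. All names are hypothetical, the CT synthesis network itself is out of scope and replaced with random data, and this is not the authors' actual architecture.

```python
# Minimal sketch (not the paper's architecture): early fusion of MRI and
# synthetic CT by channel stacking, fed to a small 3D CNN.
import torch
import torch.nn as nn

class FusionSegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),  # 2 channels: MRI + synthetic CT
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, num_classes, kernel_size=1),   # per-voxel class logits
        )

    def forward(self, mri, synth_ct):
        x = torch.cat([mri, synth_ct], dim=1)            # (B, 2, D, H, W)
        return self.net(x)

mri = torch.randn(1, 1, 32, 64, 64)       # toy MRI volume
synth_ct = torch.randn(1, 1, 32, 64, 64)  # stand-in for a generated CT volume
print(FusionSegNet()(mri, synth_ct).shape)  # torch.Size([1, 2, 32, 64, 64])
```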

2021, Vol 2021, pp. 1-10
Author(s): Zhuqing Yang

Medical image segmentation (IS) is a research field in image processing. Deep learning methods are used to automatically segment organs, tissues, or tumor regions in medical images, which can assist doctors in diagnosing diseases. Since most CNN-based IS models are two-dimensional, they are not well suited to three-dimensional medical imaging; conversely, three-dimensional segmentation models suffer from complex network structures and a large amount of computation. Therefore, this study introduces the self-excited compressed dilated convolution (SECDC) module on the basis of the 3D U-Net network and proposes an improved 3D U-Net model. In the SECDC module, 1 × 1 × 1 convolutions reduce the computational cost of the model, and combining normal convolution with dilated convolution at a dilation rate of 2 extracts multi-view features from the image. At the same time, a 3D squeeze-and-excitation (3D-SE) module automatically learns the importance of each channel. Experimental results on the BraTS2019 dataset show that the model reaches Dice coefficients of 0.87 on the whole tumor, 0.84 on the tumor core, and 0.80 on the enhancing tumor, the region that is hardest to segment. These results indicate that the improved 3D U-Net greatly reduces computation while achieving better segmentation quality and robustness, and that the model can meet the clinical needs of brain tumor segmentation.
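Reconstructed from the abstract alone, the sketch below shows what an SECDC-style block could look like in PyTorch: a 1×1×1 bottleneck to cut computation, parallel normal and dilation-2 convolutions for multi-view features, and a 3D squeeze-and-excitation gate. The published module may differ in detail; all names here are illustrative.

```python
# Hedged sketch of an SECDC-style block, inferred from the abstract only.
import torch
import torch.nn as nn

class SE3D(nn.Module):
    """3D squeeze-and-excitation: learn a per-channel importance weight."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w

class SECDCBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = out_ch // 2
        self.squeeze = nn.Conv3d(in_ch, mid, 1)                       # 1x1x1 cuts computation
        self.conv_d1 = nn.Conv3d(mid, mid, 3, padding=1)              # normal convolution
        self.conv_d2 = nn.Conv3d(mid, mid, 3, padding=2, dilation=2)  # dilation rate 2
        self.se = SE3D(2 * mid)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.squeeze(x))
        multi_view = torch.cat([self.conv_d1(x), self.conv_d2(x)], dim=1)
        return self.act(self.se(multi_view))

block = SECDCBlock(32, 32)
print(block(torch.randn(1, 32, 16, 32, 32)).shape)  # torch.Size([1, 32, 16, 32, 32])
```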


Sensors, 2021, Vol 21 (1), pp. 268
Author(s): Yeganeh Jalali, Mansoor Fateh, Mohsen Rezvani, Vahid Abolghasemi, Mohammad Hossein Anisi

Lung CT image segmentation is a key step in many applications such as lung cancer detection. It is considered a challenging problem due to similar image densities in the pulmonary structures and differences among scanners and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore lack accuracy; another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning have been effectively applied to medical image segmentation, and among existing deep neural networks, the U-Net has been particularly successful in this field. In this paper, we propose a deep neural network architecture that performs automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Ground truths corresponding to these images are then extracted via morphological operations and manual refinement. Finally, the prepared images and their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (bidirectional convolutional long short-term memory) as an advanced integrator module instead of simple concatenation, merging the feature maps extracted in the corresponding contracting path with the output of the preceding up-convolutional layer in the expansion path. A densely connected convolutional block is also utilized in the contracting path. The results of our extensive experiments on lung CT images (the LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.
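As an illustration of one piece of this design, the sketch below builds a U-Net-style encoder from torchvision's ResNet-34, exposing intermediate stages as skip connections. The BConvLSTM integrator and the dense block are omitted; this is an assumption-laden sketch, not the published Res BCDU-Net.

```python
# Hedged sketch: a ResNet-34 backbone serving as a U-Net encoder.
# Grayscale CT slices would be replicated to 3 channels in practice.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class ResNetEncoder(nn.Module):
    """Expose intermediate ResNet-34 stages as U-Net skip connections."""
    def __init__(self):
        super().__init__()
        r = resnet34(weights=None)  # torchvision >= 0.13; use pretrained weights in practice
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)   # /2, 64 ch
        self.pool = r.maxpool
        self.stage1, self.stage2 = r.layer1, r.layer2       # /4 (64 ch), /8 (128 ch)
        self.stage3, self.stage4 = r.layer3, r.layer4       # /16 (256 ch), /32 (512 ch)

    def forward(self, x):
        s0 = self.stem(x)
        s1 = self.stage1(self.pool(s0))
        s2 = self.stage2(s1)
        s3 = self.stage3(s2)
        s4 = self.stage4(s3)
        return [s0, s1, s2, s3], s4   # skip features, bottleneck

skips, bottleneck = ResNetEncoder()(torch.randn(1, 3, 256, 256))
print([t.shape[1] for t in skips], bottleneck.shape[1])  # [64, 64, 128, 256] 512
```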


Author(s): Shaohua Li, Xiuchao Sui, Xiangde Luo, Xinxing Xu, Yong Liu, ...

Medical image segmentation is important for computer-aided diagnosis. Good segmentation demands that the model see the big picture and fine details simultaneously, i.e., learn image features that incorporate large context while keeping high spatial resolution. To approach this goal, the most widely used methods, U-Net and its variants, extract and fuse multi-scale features. However, the fused features still have small "effective receptive fields" focused on local image cues, which limits their performance. In this work, we propose Segtran, an alternative segmentation framework based on transformers, which have unlimited "effective receptive fields" even at high feature resolutions. The core of Segtran is a novel Squeeze-and-Expansion transformer: a squeezed attention block regularizes the self-attention of transformers, and an expansion block learns diversified representations. Additionally, we propose a new positional encoding scheme for transformers that imposes a continuity inductive bias for images. Experiments were performed on 2D and 3D medical image segmentation tasks: optic disc/cup segmentation in fundus images (REFUGE'20 challenge), polyp segmentation in colonoscopy images, and brain tumor segmentation in MRI scans (BraTS'19 challenge). Compared with representative existing methods, Segtran consistently achieved the highest segmentation accuracy and exhibited good cross-domain generalization capabilities.
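The abstract's "squeezed attention" suggests routing self-attention through a small set of latent tokens so cost grows as O(N·M) rather than O(N²). The sketch below implements that general idea with PyTorch's nn.MultiheadAttention; it follows the description only, not the authors' exact block, and the inducing-token count is an assumption.

```python
# Hedged sketch of the "squeeze" idea: attend from N image tokens down to M
# learned inducing tokens, then back out, bounding attention cost by N*M.
import torch
import torch.nn as nn

class SqueezedAttention(nn.Module):
    def __init__(self, dim=256, num_heads=8, num_inducing=32):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(num_inducing, dim))  # latent "squeeze" tokens
        self.attn_in = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_out = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, N, dim) flattened image features
        b = x.size(0)
        q = self.inducing.unsqueeze(0).expand(b, -1, -1)
        squeezed, _ = self.attn_in(q, x, x)    # N tokens -> M inducing tokens
        out, _ = self.attn_out(x, squeezed, squeezed)  # expand back to N tokens
        return out

feats = torch.randn(2, 48 * 48, 256)  # e.g., a flattened 48x48 feature map
print(SqueezedAttention()(feats).shape)  # torch.Size([2, 2304, 256])
```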


2014, Vol 3 (2), pp. 14-32
Author(s): Mithun Kumar PK, Mohammad Motiur Rahman

Calcification plaque is a kind of artifact that appears in computed tomography (CT) images as regions with very high attenuation coefficients. CT images are more helpful than other modalities (e.g., ultrasonic imaging or magnetic resonance imaging (MRI)) for disease diagnosis, but unfortunately CT images are sometimes affected by calcification plaque. Segmentation cannot be optimal when calcification is present in a CT image, and calcification plaque is therefore a major obstacle to optimal organ segmentation and detection. This paper presents an effective method for alleviating calcification in CT images. First, we apply Fisher's discriminant analysis (FDA) to estimate an optimal threshold value. Second, this threshold is used to extract an optimal threshold image. Next, a morphological operation erodes heavy calcification, and an XOR operation adjusts the optimal threshold image against the input image. Finally, we apply an extra-energy reduction (EER) function to smooth the resulting image. The proposed method effectively attenuates calcification plaque in CT images.
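A two-class threshold that maximizes Fisher's between-class separation coincides with what Otsu's method computes, so the sketch below uses scikit-image's threshold_otsu as a stand-in for the FDA step, followed by the morphological erosion and XOR steps. The EER smoothing function is not publicly specified and is omitted; the toy volume is random data.

```python
# Hedged sketch of the pipeline's thresholding and morphology steps.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, ball

ct = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a CT volume

t = threshold_otsu(ct)                      # FDA-style optimal threshold (assumption)
bright = ct > t                             # candidate high-attenuation (calcified) voxels
eroded = binary_erosion(bright, ball(1))    # erode heavy calcification
adjusted = np.logical_xor(bright, eroded)   # XOR adjustment against the thresholded image

print(t, bright.sum(), adjusted.sum())
```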


2021, Vol 11
Author(s): He Huang, Guang Yang, Wenbo Zhang, Xiaomei Xu, Weiji Yang, ...

Glioma is the most common primary central nervous system tumor, accounting for about half of all intracranial primary tumors. As a non-invasive examination method, MRI plays an extremely important guiding role in the clinical management of tumors. However, manually segmenting brain tumors from MRI requires a great deal of time and effort from doctors, which delays follow-up diagnosis and treatment planning. With the development of deep learning, medical image segmentation is gradually being automated. However, brain tumors are easily confused with stroke lesions, and severe class imbalance makes brain tumor segmentation one of the most difficult tasks in MRI segmentation. To address these problems, we propose a deep multi-task learning framework that integrates a multi-depth fusion module to accurately segment brain tumors. In this framework, we add a distance-transform decoder to the V-Net, which makes the segmentation contour generated by the mask decoder more accurate and reduces rough boundaries. To combine the two decoders' tasks, we form a weighted sum of their loss functions, so that the distance-map prediction regularizes the mask prediction. At the same time, the multi-depth fusion module in the encoder enhances the network's ability to extract features. The accuracy of the model was evaluated online using the multimodal MRI records of the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. The method obtains high-quality segmentation results, with an average Dice score as high as 78%. The experimental results show that this model has great potential for segmenting brain tumors automatically and accurately.
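A minimal sketch of the two-decoder training signal described above: a Dice loss on the mask head plus an L2 loss on a distance-map head, summed with weights so that the distance prediction regularizes the mask prediction. The weights and the exact distance formulation are assumptions.

```python
# Hedged sketch of a weighted mask + distance-map multi-task loss.
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def dice_loss(probs, target, eps=1e-6):
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

def multitask_loss(mask_logits, dist_pred, gt_mask, w_mask=1.0, w_dist=0.5):
    # Ground-truth distance map from the binary mask, per sample (computed
    # on the fly here for brevity; in practice it would be precomputed).
    gt_np = gt_mask.squeeze(1).cpu().numpy()
    gt_dist = torch.stack([
        torch.as_tensor(distance_transform_edt(m), dtype=torch.float32) for m in gt_np
    ]).unsqueeze(1)
    loss_mask = dice_loss(torch.sigmoid(mask_logits), gt_mask)
    loss_dist = F.mse_loss(dist_pred, gt_dist)   # regularizes the mask head
    return w_mask * loss_mask + w_dist * loss_dist

gt = (torch.rand(2, 1, 16, 32, 32) > 0.7).float()
loss = multitask_loss(torch.randn(2, 1, 16, 32, 32), torch.randn(2, 1, 16, 32, 32), gt)
print(loss.item())
```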


2021
Author(s): Nabila Abraham

Convolutional neural networks have proven to be fast and precise frameworks with great potential in image segmentation. Within the medical domain, image segmentation is a precursor to several applications, including surgical simulation, treatment planning, and patient prognosis. In this thesis, we attempt to address two major limitations of current segmentation practice: 1) dealing with unbalanced classes and 2) dealing with multiple modalities. In medical imaging, unbalanced classes arise because the regions of interest are typically much smaller in volume than the background class or other classes. We propose an improvement to the current gold-standard cost function that boosts the network's focus on the smaller classes. Another problem within medical imaging is the variation in both anatomy and pathology across patients. Utilizing multiple imaging modalities provides complementary, segmentation-specific information and is commonly employed by radiologists when contouring data. We propose an image fusion strategy for multi-modal data that uses the variation in modality-specific features to guide task-specific learning. Together, our contributions form a framework that maximizes the representational power of the dataset using models with lower complexity and higher generalizability. Our contributions outperform baseline models for multi-class segmentation and are modular enough to be scaled up to deeper networks. We demonstrate the effectiveness of the proposed cost function and multimodal framework, both individually and together, on benchmark datasets including the Breast Ultrasound Dataset B (BUS) [1], the International Skin Imaging Collaboration (ISIC 2018) [2], [3], and the Brain Tumor Segmentation Challenge (BraTS 2018) [4]. In all experiments, the proposed methods match or outperform the baseline methods while employing simpler networks.
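The abstract does not spell out the modified cost function; one common way to shift a network's focus toward small foreground classes, consistent with the description above, is a Tversky-style generalization of the Dice loss. The sketch below uses illustrative alpha/beta weights and should not be read as the thesis' exact formulation.

```python
# Hedged sketch: a Tversky-style loss that penalizes false negatives more
# than false positives (alpha < beta) to emphasize small classes.
import torch

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

probs = torch.rand(2, 1, 64, 64)                     # predicted foreground probabilities
target = (torch.rand(2, 1, 64, 64) > 0.95).float()   # small foreground region
print(tversky_loss(probs, target).item())
```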


2018, Vol 2018, pp. 1-15
Author(s): Chuin-Mu Wang, Chieh-Ling Huang, Sheng-Chih Yang

Three-dimensional (3D) medical image segmentation is used to segment a target (a lesion or an organ) in 3D medical images. Through this process, 3D target information is obtained; hence, the technology is an important auxiliary tool for medical diagnosis. Although some methods have proved successful for two-dimensional (2D) image segmentation, their direct use in the 3D case has been unsatisfactory. To obtain more precise tumor segmentation from 3D MR images, we propose in this paper the 3D shape-weighted level set method (3D-SLSM). The proposed method first converts the level set method (LSM), which performs well in 2D image segmentation, into a 3D algorithm suitable for computation over entire 3D image volumes, improving both efficiency and accuracy. A 3D shape-weighted value, updated according to changes in volume, is then added at each 3D-SLSM iteration. Besides increasing the convergence rate and suppressing background noise, this shape-weighted value also brings the segmented contour closer to the actual tumor margins. To quantitatively analyze 3D-SLSM and examine its feasibility for clinical applications, we ran experiments on both computer-simulated image sequences and actual breast MRI cases, and compared 3D-SLSM against various existing 3D segmentation methods. The experimental results demonstrate that 3D-SLSM produces precise segmentation on both types of images and outperforms the existing 3D segmentation methods on quantitative measures.
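As a rough stand-in for a 3D level-set segmentation of this kind, the sketch below runs scikit-image's morphological Chan-Vese on a toy volume. The shape-weighted term that 3D-SLSM adds at each iteration is not reproduced here; this is only a generic level-set baseline.

```python
# Hedged sketch: 3D level-set-style segmentation via morphological Chan-Vese,
# a generic stand-in for 3D-SLSM (without the paper's shape-weighted term).
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Toy volume: a bright sphere ("tumor") on a noisy background.
z, y, x = np.mgrid[:48, :48, :48]
vol = ((z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 < 10 ** 2).astype(float)
vol += 0.2 * np.random.rand(48, 48, 48)

seg = morphological_chan_vese(vol, 30, init_level_set="checkerboard", smoothing=1)
print(seg.shape, seg.sum())  # binary 3D mask evolved toward the sphere boundary
```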


Author(s): Danbing Zou, Qikui Zhu, Pingkun Yan

Domain adaptation aims to avoid retraining a pre-trained model when applying it to a different domain, which would otherwise require a large amount of additional training data from the target domain. This objective is usually achieved by establishing connections between the source domain labels and the target domain data. However, such an imbalanced, one-way source-to-target pass may not eliminate the domain gap, which limits the performance of the pre-trained model. In this paper, we propose an innovative Dual-Scheme Fusion Network (DSFN) for unsupervised domain adaptation. By building both source-to-target and target-to-source connections, this balanced joint information flow helps reduce the domain gap and further improves network performance. The mechanism is also applied at the inference stage, where both the original target image and the generated source-style image are segmented with the proposed joint network, and the results are fused to obtain a more robust segmentation. Extensive experiments on unsupervised cross-modality medical image segmentation were conducted on two tasks: brain tumor segmentation and cardiac structure segmentation. The results show that our method achieves significant performance improvements over other state-of-the-art domain adaptation methods.
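A minimal sketch of the inference-time fusion described above: segment both the original target-domain image and its target-to-source translation, then average the class probabilities. The translator and segmenter below are single-layer placeholders for the trained DSFN components, which this sketch does not reproduce.

```python
# Hedged sketch of inference-time fusion with placeholder networks.
import torch
import torch.nn as nn

translator = nn.Conv2d(1, 1, 3, padding=1)   # placeholder target->source generator
segmenter = nn.Conv2d(1, 4, 1)               # placeholder segmentation network

def fused_prediction(target_img):
    with torch.no_grad():
        p_target = segmenter(target_img).softmax(dim=1)               # original image
        p_source = segmenter(translator(target_img)).softmax(dim=1)   # source-style image
        return (0.5 * (p_target + p_source)).argmax(dim=1)            # fused label map

img = torch.randn(1, 1, 128, 128)
print(fused_prediction(img).shape)  # torch.Size([1, 128, 128])
```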

