Deep Weber Dominant Local Order Based Feature Generator and Improved Convolution Neural Network for Brain Tumor Segmentation in MR Images

This paper introduces a scheme for retrieving deep features to carry out the procedure of recognising brain tumors from MR images. Initially, the MR brain image is denoised through the Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF), after which the contrast of the image is improved through Contrast Limited Adaptive Histogram Equalization (CLAHE). Once the pre-processing task is completed, the next phase is feature extraction. In order to acquire the features of the pre-processed images, this article offers a feature extraction technique named Deep Weber Dominant Local Order Based Feature Generator (DWDLOBFG). Once the deep features are retrieved, the next stage is to segment the brain tumor. An Improved Convolution Neural Network (ICNN) is used to perform this step. To explore the efficiency of the deep feature extraction and deep machine learning methods, four performance indicators were used: Sensitivity (SEN), Jaccard Index (JI), Dice Similarity Coefficient (DSC) and Positive Predictive Value (PPV). The experimental results illustrate that DWDLOBFG and ICNN achieve better outputs than existing techniques.
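A minimal sketch of the pre-processing and evaluation steps described above is given below, using OpenCV's CLAHE and plain NumPy metric functions. The MDBUTMF filter, the DWDLOBFG feature generator and the ICNN are specific to the paper and are not reproduced here; a standard median filter stands in for the denoising step.

```python
# Illustrative pre-processing and metrics only; MDBUTMF, DWDLOBFG and ICNN
# from the paper are paper-specific and replaced by placeholders.
import cv2
import numpy as np

def preprocess(slice_u8: np.ndarray) -> np.ndarray:
    """Denoise an 8-bit MR slice (median filter as a stand-in for MDBUTMF),
    then enhance contrast with CLAHE."""
    denoised = cv2.medianBlur(slice_u8, 3)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard Index between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)
```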

2020 · Vol. 57 (14) · pp. 141009
Author(s): 冯博文 Feng Bowen, 吕晓琪 Lü Xiaoqi, 谷宇 Gu Yu, 李菁 Li Qing, 刘阳 Liu Yang

Author(s): Mukesh Kumar Chandrakar, Anup Mishra

Brain tumor segmentation is an emerging application of automated medical image diagnosis. A robust approach to brain tumor segmentation and detection remains an open research problem, and the performance metrics of existing tumor detection methods are not well established. Deep neural networks based on convolutional neural networks (CNNs) are being researched in this direction, but no general architecture has been found that can be used as a robust method for brain tumor detection. The authors have proposed a multipath CNN architecture for brain tumor segmentation and detection, which provides improved results compared to existing methods. The proposed work has been tested on the BRATS2013, BRATS2015, and BRATS2017 datasets, with significant improvement in Dice index and timing values by utilizing the capability of the multipath CNN architecture, which combines both local and global paths.
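The sketch below shows one common way to realise a two-path ("local plus global") patch-wise CNN in PyTorch. Channel counts, kernel sizes and the number of output classes are illustrative assumptions, not the authors' exact design.

```python
# Minimal two-path CNN sketch: a local path with small kernels for fine detail
# and a global path with a large kernel for wider context, fused by a 1x1 head.
import torch
import torch.nn as nn

class MultipathCNN(nn.Module):
    def __init__(self, in_ch: int = 4, n_classes: int = 5):
        super().__init__()
        # Local path: small receptive field.
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global path: large kernel captures surrounding context.
        self.glob = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=13, padding=6), nn.ReLU(),
        )
        # Fuse both paths and predict per-pixel class scores.
        self.head = nn.Conv2d(128, n_classes, kernel_size=1)

    def forward(self, x):
        return self.head(torch.cat([self.local(x), self.glob(x)], dim=1))

# Example: logits = MultipathCNN()(torch.randn(1, 4, 64, 64))  # -> (1, 5, 64, 64)
```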


2021 · Vol. 7 (12) · pp. 269
Author(s): Ilyasse Aboussaleh, Jamal Riffi, Adnane Mohamed Mahraz, Hamid Tairi

Brain tumors are considered among the most serious causes of death in the world. Thus, it is very important to detect them as early as possible. Many approaches have been proposed to predict and segment the tumor. However, they suffer from different problems such as the necessity of intervention by a specialist, the long run-time required and the choice of an appropriate feature extractor. To address these issues, we proposed an approach based on a convolutional neural network architecture aimed at simultaneously predicting and segmenting a cerebral tumor. The proposal was divided into two phases. Firstly, to avoid the use of labeled images, which imply a subjective intervention by the specialist, we used a simple binary annotation that reflects whether the tumor exists or not. Secondly, the prepared image data were fed into our deep learning model, from which the final classification was obtained; if the classification indicated the existence of a tumor, the brain tumor was segmented based on the feature representations generated by the convolutional neural network architecture. The proposed method was trained on the BraTS 2017 dataset with different types of gliomas. The achieved results show the performance of the proposed approach in terms of accuracy, precision, recall and Dice similarity coefficient. Our model showed an accuracy of 91% in tumor classification and a Dice similarity coefficient of 82.35% in tumor segmentation.
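One way to sketch the "classify first, then segment from the learned features" idea is a class-activation-map style read-out, shown below in PyTorch. This is only a hedged illustration: the layer sizes, the binary classifier head and the CAM-based mask are assumptions, and the paper's actual segmentation rule may differ.

```python
# Classification CNN with global average pooling; when the slice is classified
# as containing a tumor, a coarse mask is read out from the same feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifyThenSegment(nn.Module):
    def __init__(self, in_ch: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64, 2)  # binary label: no tumor / tumor

    def forward(self, x):
        fmap = self.features(x)                  # (B, 64, H/4, W/4)
        logits = self.fc(fmap.mean(dim=(2, 3)))  # global average pooling
        # CAM: weight the feature maps by the tumor-class weights.
        cam = torch.einsum('c,bchw->bhw', self.fc.weight[1], fmap)
        mask = torch.sigmoid(F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                                           mode='bilinear', align_corners=False))
        return logits, mask  # use mask only when argmax(logits) == 1
```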


2018 · Vol. 2018 · pp. 1-14
Author(s): Shaoguo Cui, Lei Mao, Jingfeng Jiang, Chang Liu, Shuyu Xiong

Brain tumors can appear anywhere in the brain and have vastly different sizes and morphologies. Additionally, these tumors are often diffuse and poorly contrasted. Consequently, segmenting brain tumors and intratumor subregions from magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel, fully automatic segmentation method for MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) combined with transfer learning, was used to first process the MRI data. The goal of this first subnetwork was to define the tumor region in an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method obtains promising segmentation results at a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.
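The cascade described above can be wired up as two small networks run in sequence, as in the PyTorch sketch below. Both networks are toy stand-ins for the paper's TLN and ITCN; depths, channel counts and the absence of transfer learning are simplifying assumptions.

```python
# Cascade sketch: a tiny fully convolutional localisation net produces a
# whole-tumor mask, then a small patch classifier labels intratumor subregions.
import torch
import torch.nn as nn

class TinyTLN(nn.Module):
    """Stand-in for the tumor localization network: per-pixel tumor/background logits."""
    def __init__(self, in_ch: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),
        )

    def forward(self, x):
        return self.net(x)

class TinyITCN(nn.Module):
    """Stand-in for the intratumor classification network: labels small patches."""
    def __init__(self, in_ch: int = 4, n_sub: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_sub),
        )

    def forward(self, patch):
        return self.net(patch)

# Usage idea: run TinyTLN on a slice, threshold its tumor logits to get a mask,
# then feed patches sampled inside that mask to TinyITCN for subregion labels.
```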


2021 · Vol. 21 (S2)
Author(s): Daobin Huang, Minghui Wang, Ling Zhang, Haichun Li, Minquan Ye, ...

Abstract. Background: Accurately segmenting the tumor region of MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. Methods: We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels and (2) a Multi-scale Feature Fusing Network (MSFFN) to merge all the different-scale features in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network against the class imbalance issue. Results: We validate our method on BRATS 2015, achieving Dice scores of 0.86, 0.73 and 0.61 for the three tumor regions (complete, core and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance on the other two regions. Conclusions: The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions can alleviate the class imbalance issue and guide the training process. The proposed method can be used in other medical segmentation tasks.
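A common way to realise a hybrid loss for class-imbalanced segmentation is to combine soft Dice with cross-entropy, as in the PyTorch sketch below. This is an assumption for illustration; the two hybrid losses used in the paper may be weighted or composed differently.

```python
# Hybrid Dice + cross-entropy loss: cross-entropy drives per-pixel accuracy,
# while the Dice term counteracts class imbalance between tumor and background.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridDiceCELoss(nn.Module):
    def __init__(self, dice_weight: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.dice_weight = dice_weight
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (B, C, H, W); target: (B, H, W) integer class labels.
        ce = F.cross_entropy(logits, target)
        probs = torch.softmax(logits, dim=1)
        one_hot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
        inter = (probs * one_hot).sum(dim=(0, 2, 3))
        denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
        dice = (2 * inter + self.eps) / (denom + self.eps)
        return (1 - self.dice_weight) * ce + self.dice_weight * (1 - dice.mean())
```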

