Glioblastoma Surgery Imaging–Reporting and Data System: Validation and Performance of the Automated Segmentation Task

Cancers ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 4674
Author(s):  
David Bouget ◽  
Roelant S. Eijgelaar ◽  
André Pedersen ◽  
Ivar Kommers ◽  
Hilko Ardon ◽  
...  

For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective when performed by crude eyeballing or manual measurement. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations so that tumor features can be extracted rapidly and objectively. In this study, we improved automatic tumor segmentation, compared the agreement with manual raters, described the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patientwise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated within five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.
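The evaluation metrics named above (Dice score and 95th-percentile Hausdorff distance) can be illustrated with a minimal sketch. The functions dice_score and hd95 below are illustrative stand-ins, not the validation code used for GSI-RADS; for brevity, the HD95 variant measures distances over all foreground voxels rather than extracted surfaces and assumes both masks are non-empty.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric voxel-to-voxel distances (both masks assumed
    non-empty; uses all foreground voxels rather than extracted surfaces)."""
    p = np.argwhere(pred.astype(bool)) * np.asarray(spacing)
    g = np.argwhere(gt.astype(bool)) * np.asarray(spacing)
    d = cdist(p, g)  # pairwise distances in mm
    return float(np.percentile(np.hstack([d.min(axis=1), d.min(axis=0)]), 95))

# Tiny synthetic example: two overlapping 3D blobs.
pred = np.zeros((16, 16, 16), dtype=np.uint8); pred[4:10, 4:10, 4:10] = 1
gt = np.zeros_like(pred); gt[5:11, 5:11, 5:11] = 1
print(f"Dice: {dice_score(pred, gt):.3f}, HD95: {hd95(pred, gt):.2f} mm")
```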

2020 ◽  
Vol 10 (11) ◽  
pp. 2784-2794
Author(s):  
Mingyuan Pan ◽  
Yonghong Shi ◽  
Zhijian Song

The automatic segmentation of brain tumors in magnetic resonance (MR) images is very important in diagnosis, radiotherapy planning, surgical navigation and several other clinical processes. Because the location, size, shape and boundary of gliomas are heterogeneous, segmenting gliomas and intratumoral structures is very difficult. In addition, the multi-center issue, in which multimodal brain glioma images (such as T1, T2, fluid-attenuated inversion recovery (FLAIR), and T1c images) come from different radiation centers, makes the task even more challenging. This paper presents a multimodal, multi-scale, double-pathway, 3D residual convolutional neural network (CNN) for automatic glioma segmentation. In the pre-processing step, a robust gray-level normalization method is proposed to address the multi-center problem, in which the intensity ranges from different centers vary considerably. Then, a double-pathway 3D architecture based on the DeepMedic toolkit is trained using multi-modality information to fuse local and context features. In the post-processing step, a fully connected conditional random field (CRF) is built to improve performance by filling and connecting isolated segmentations and holes. Experiments on the Multimodal Brain Tumor Segmentation (BraTS) 2017 and 2019 datasets showed that this method can delineate the whole tumor with a Dice coefficient, sensitivity and positive predictive value (PPV) of 0.88, 0.89 and 0.88, respectively. For the segmentation of the tumor core and the enhancing area, the sensitivity reached 0.80. The results indicate that this method can accurately segment gliomas and intratumoral structures from multimodal MR images and has value for clinical practice.
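As a rough illustration of the kind of intensity normalization used to mitigate multi-center intensity differences, the sketch below applies percentile-clipped z-score normalization per modality. This is a generic stand-in under the assumption of non-zero foreground voxels, not the robust gray-level normalization actually proposed in the paper.

```python
import numpy as np

def robust_normalize(volume: np.ndarray, low_pct=0.5, high_pct=99.5) -> np.ndarray:
    """Percentile-clipped z-score normalization over non-zero (foreground) voxels."""
    foreground = volume[volume > 0]
    lo, hi = np.percentile(foreground, [low_pct, high_pct])
    clipped = np.clip(volume, lo, hi)
    mean, std = clipped[volume > 0].mean(), clipped[volume > 0].std()
    return (clipped - mean) / (std + 1e-8)

# Applied independently to each modality (T1, T1c, T2, FLAIR) and each scanner.
vol = np.random.rand(64, 64, 64) * 1000.0  # stand-in for one MR modality
normalized = robust_normalize(vol)
```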


2013 ◽  
Author(s):  
Sérgio Pereira ◽  
Joana Festa ◽  
José António Mariz ◽  
Nuno Sousa ◽  
Carlos Silva

This work is part of the MICCAI Grand Challenge: MR Brain Image Segmentation 2013. It aims at the automatic segmentation of the brain into cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM). The provided dataset contains patients with white matter lesions, which makes the segmentation task more challenging. The proposed algorithm uses multi-sequence MR images to extract meaningful features and train a Random Decision Forest that classifies each voxel of the image. The results show that the method is robust to the presence of white matter lesions, and the metrics indicate that the overall results are competitive.
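The voxel-wise classification strategy described above can be sketched with scikit-learn. The feature matrix, labels and forest hyperparameters below are synthetic placeholders for illustration only, not the challenge data or the authors' feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in: each row is one voxel, the columns are features extracted from
# the multi-sequence MR images (e.g. intensities and local statistics).
# Labels: 0 = background, 1 = CSF, 2 = GM, 3 = WM.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 10))   # 5000 voxels, 10 features each
y_train = rng.integers(0, 4, size=5000)

forest = RandomForestClassifier(n_estimators=50, max_depth=12, n_jobs=-1)
forest.fit(X_train, y_train)

X_new = rng.normal(size=(200, 10))      # voxels from an unseen scan
tissue_labels = forest.predict(X_new)   # per-voxel tissue class
```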


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii153-ii154
Author(s):  
Satoshi Takahashi ◽  
Masamichi Takahashi ◽  
Manabu Kinoshita ◽  
Mototaka Miyake ◽  
Risa Kawaguchi ◽  
...  

Abstract BACKGROUND Manual segmentation of brain tumor images from the large volume of MR images generated in clinical routine is difficult and time-consuming. Hence, it is imperative to develop a machine learning model for automated segmentation of brain tumor images. PURPOSE Machine learning models for automated MR image segmentation of gliomas may be useful. However, image differences among facilities cause performance degradation and impede successful automatic segmentation. In this study, we propose a method to solve this issue. METHODS We used data from the Multimodal Brain Tumor Image Segmentation Benchmark (BraTS) and the Japanese cohort (JC) dataset collected from 10 facilities. Three models for tumor segmentation were developed: the BraTS model was trained on the BraTS dataset, the JC model was trained on the JC dataset, and the Fine-tuning model was the BraTS model fine-tuned on the JC dataset. RESULTS MR images of 544 patients were obtained for the JC dataset, half of which was used for independent testing. The Dice coefficient score of the JC model on the JC dataset was 0.779 ± 0.137, whereas that of the BraTS model was markedly lower (0.717 ± 0.207). The mean Dice coefficient score of the Fine-tuning model on the JC dataset was 0.769 ± 0.138. There was a significant difference between the BraTS and JC models (P < 0.0001) and between the BraTS and Fine-tuning models (P = 0.002); however, no significant difference was observed between the JC and Fine-tuning models (P = 0.673). CONCLUSIONS Applying the BraTS model to heterogeneous datasets can significantly reduce its performance; however, fine-tuning can solve this issue. Since our fine-tuning method requires fewer than 20 cases, it is particularly useful for facilities with only a few glioma cases.
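A minimal sketch of the fine-tuning strategy described above, assuming a PyTorch segmentation network: start from weights trained on BraTS and continue training on a small local cohort with a reduced learning rate. SegmentationNet, the commented-out checkpoint path and the synthetic local cases are hypothetical placeholders, not artifacts from the study.

```python
import torch
from torch import nn, optim

class SegmentationNet(nn.Module):
    """Stand-in for a 3D segmentation network (4 MR modalities in, 2 classes out)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 2, kernel_size=1),
        )
    def forward(self, x):
        return self.body(x)

model = SegmentationNet()
# In practice, initialize from the BraTS-trained weights, e.g.:
# model.load_state_dict(torch.load("brats_pretrained.pt"))  # hypothetical path

optimizer = optim.Adam(model.parameters(), lr=1e-4)  # small learning rate for fine-tuning
criterion = nn.CrossEntropyLoss()

# Synthetic stand-in for ~20 locally annotated cases (4 modalities, binary tumor mask).
local_cases = [(torch.randn(1, 4, 32, 32, 32),
                torch.randint(0, 2, (1, 32, 32, 32))) for _ in range(20)]

model.train()
for images, masks in local_cases:
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
```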


Author(s):  
Alexandr N. Korabelnikov ◽  
Alexandr V. Kolsanov ◽  
Sergey S. Chaplygin ◽  
Pavel M. Zelter ◽  
...  

Anatomical structure segmentation on computed tomography (CT) is a key stage in medical visualization and computer-aided diagnosis. Tumors are one type of internal structure for which the problem of automatic segmentation currently has no solution of fully satisfactory quality. The reasons are the high variance of tumor density and the inability to use a priori anatomical information about shape. In this paper we propose an automatic method for liver tumor segmentation based on convolutional neural networks (CNNs). Training and validation were performed on a set of CT scans with ground-truth liver and tumor segmentations. The average volumetric overlap error (VOE) under cross-validation is 17.3%. Pre- and post-processing algorithms that increase the accuracy and performance of the segmentation procedure were also considered. In particular, a six-fold acceleration of the segmentation procedure was achieved with a negligible decrease in quality.
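The reported volumetric overlap error (VOE) is one minus the Jaccard index of the predicted and ground-truth masks. The sketch below is a minimal illustration on synthetic masks, not the authors' evaluation code.

```python
import numpy as np

def volumetric_overlap_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """VOE = 1 - |A ∩ B| / |A ∪ B|, expressed as a percentage."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return 100.0 * (1.0 - np.logical_and(pred, gt).sum() / union)

# Example with two overlapping synthetic masks.
pred = np.zeros((32, 32, 32), dtype=np.uint8); pred[8:20, 8:20, 8:20] = 1
gt = np.zeros_like(pred); gt[10:22, 10:22, 10:22] = 1
print(f"VOE: {volumetric_overlap_error(pred, gt):.1f}%")
```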


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 21323-21331
Author(s):  
Mingwei Cai ◽  
Jiazhou Wang ◽  
Qing Yang ◽  
Ying Guo ◽  
Zhen Zhang ◽  
...  

Author(s):  
Jiaxin Li ◽  
Houjin Chen ◽  
Yanfeng Li ◽  
Yahui Peng ◽  
Naxin Cai ◽  
...  
