Convolutional neural network in nasopharyngeal carcinoma: how good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI?

Author(s): Lun M. Wong, Qi Yong H. Ai, Frankie K. F. Mo, Darren M. C. Poon, Ann D. King

2018, Vol 2018, pp. 1-7

Author(s): Qiaoliang Li, Yuzhen Xu, Zhewei Chen, Dexiang Liu, Shi-Ting Feng, ...

Objectives. To evaluate a deep learning architecture, based on the convolutional neural network (CNN) technique, for automatic tumor segmentation of magnetic resonance imaging (MRI) in nasopharyngeal carcinoma (NPC). Materials and Methods. In this prospective study, 87 MRI scans containing tumor regions were acquired from newly diagnosed NPC patients. These 87 scans were augmented to more than 60,000 images. The proposed CNN is composed of two phases: feature representation and score-map reconstruction. We designed a stepwise scheme to train the network. To evaluate performance, we used case-by-case leave-one-out cross-validation (LOOCV). The ground-truth tumor contours were acquired by consensus of two experienced radiologists. Results. The mean values of the Dice similarity coefficient, percent match, and their corresponding ratio with our method were 0.89 ± 0.05, 0.90 ± 0.04, and 0.84 ± 0.06, respectively, all better than values reported in similar studies. Conclusions. We successfully established a segmentation method for NPC based on deep learning in contrast-enhanced MRI. Further clinical trials with dedicated algorithms are warranted.
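The overlap metrics named in the abstract can be sketched as follows. This is an illustrative implementation for binary segmentation masks, not the authors' code; in particular, the `percent_match` definition (overlap divided by ground-truth area) is an assumed common convention, since the abstract does not define it.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def percent_match(pred: np.ndarray, truth: np.ndarray) -> float:
    """Percent match: |A ∩ B| / |B|, i.e. the fraction of the ground-truth
    region covered by the prediction (definition assumed, not from the paper)."""
    truth = truth.astype(bool)
    if truth.sum() == 0:
        return 1.0
    return np.logical_and(pred.astype(bool), truth).sum() / truth.sum()
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap), which matches the scale of the reported values (0.89, 0.90, 0.84).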


2019, Vol 60 (5), pp. 586-594

Author(s): Iori Sumida, Taiki Magome, Hideki Kitamori, Indra J Das, Hajime Yamaguchi, ...

Abstract This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced CT using a deep convolutional neural network (CNN). Twenty-nine patients were selected, and CT images were acquired both without and with a contrast-enhancement medium. The transverse images were divided into 64 × 64-pixel patches, yielding 14 723 patch pairs in total from the non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, a U-net model, comprising five 2D convolution layers interleaved with pooling and unpooling layers, was used. Training was performed on 24 patients, and the remaining 5 patients were used to test the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced images of the test data, and the mean pixel value of each ROI was calculated; the mean pixel values of the ROIs at the same locations on the reference non-contrast image and the predicted non-contrast image were then compared. The difference in mean pixel value between the reference contrast-enhanced image and the predicted non-contrast image was significant (P < 0.0001) for both models. The mean pixel values also differed significantly (P < 0.0001) between the reference and predicted non-contrast images for the U-net model; in contrast, there was no significant difference for the proposed CNN model. Using the proposed CNN model, the contrast-enhanced regions were satisfactorily reduced.
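The 64 × 64 patch preparation described above can be sketched as follows. This is a minimal illustration over paired 2D slices; non-overlapping tiling and the function name are assumptions, since the abstract does not state whether patches overlap.

```python
import numpy as np

def extract_patch_pairs(non_contrast, contrast, patch_size=64):
    """Tile two aligned 2D CT slices into non-overlapping
    patch_size x patch_size patch pairs, dropping partial border tiles.
    (Non-overlapping tiling is an assumption, not stated in the paper.)"""
    h, w = non_contrast.shape
    pairs = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            pairs.append((non_contrast[y:y + patch_size, x:x + patch_size],
                          contrast[y:y + patch_size, x:x + patch_size]))
    return pairs
```

Under this scheme, a standard 512 × 512 CT slice yields 64 patch pairs, so the reported 14 723 pairs would correspond to a few hundred slices across the 29 patients.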

